Scene Segmentation with Conditional Random Fields
Learned from Partially Labeled Images
Jakob Verbeek and Bill Triggs
INRIA and Laboratoire Jean Kuntzmann, 655 avenue de l'Europe, 38330 Montbonnot, France
Abstract
Conditional Random Fields (CRFs) are an effective tool for a variety of different
data segmentation and labeling tasks including visual scene interpretation, which
seeks to partition images into their constituent semantic-level regions and assign
appropriate class labels to each region. For accurate labeling it is important to
capture the global context of the image as well as local information. We introduce a CRF-based scene labeling model that incorporates both local features
and features aggregated over the whole image or large sections of it. Secondly,
traditional CRF learning requires fully labeled datasets which can be costly and
troublesome to produce. We introduce a method for learning CRFs from datasets
with many unlabeled nodes by marginalizing out the unknown labels so that the
log-likelihood of the known ones can be maximized by gradient ascent. Loopy
Belief Propagation is used to approximate the marginals needed for the gradient and log-likelihood calculations and the Bethe free-energy approximation to
the log-likelihood is monitored to control the step size. Our experimental results
show that effective models can be learned from fragmentary labelings and that
incorporating top-down aggregate features significantly improves the segmentations. The resulting segmentations are compared to the state-of-the-art on three
different image datasets.
1 Introduction
In visual scene interpretation the goal is to assign image pixels to one of several semantic classes or
scene elements, thus jointly performing segmentation and recognition. This is useful in a variety of
applications ranging from keyword-based image retrieval (using the segmentation to automatically
index images) to autonomous vehicle navigation [1].
Random field approaches are a popular way of modelling spatial regularities in images. Their applications range from low-level noise reduction [2] to high-level object or category recognition (this
paper) and semi-automatic object segmentation [3]. Early work focused on generative modeling using Markov Random Fields, but recently Conditional Random Field (CRF) models [4] have become
popular owing to their ability to directly predict the segmentation/labeling given the observed image
and the ease with which arbitrary functions of the observed features can be incorporated into the
training process. CRF models can be applied either at the pixel-level [5, 6, 7] or at the coarser level
of super-pixels or patches [8, 9, 10]. In this paper we label images at the level of small patches, using
CRF models that incorporate both purely local (single patch) feature functions and more global 'context capturing' feature functions that depend on aggregates of observations over the whole image or large regions.
Traditional CRF training algorithms require fully-labeled training data. In practice it is difficult
and time-consuming to label every pixel in an image and most of the available image interpretation
datasets contain unlabeled pixels. Working at the patch level exacerbates this problem because
many patches contain several different pixel-level labels. Our CRF training algorithm handles this
by allowing partial and mixed labelings and optimizing the probability for the model segmentation
to be consistent with the given labeling constraints.
The rest of the paper is organized as follows: we describe our CRF model in Section 2, present our
training algorithm in Section 3, provide experimental results in Section 4, and conclude in Section 5.
2 A Conditional Random Field using Local and Global Image Features
We represent images as rectangular grids of patches at a single scale, associating a hidden class label
with each patch. Our CRF models incorporate 4-neighbor couplings between patch labels. The local
image content of each patch is encoded using texture, color and position descriptors as in [10]. For
texture we compute the 128-dimensional SIFT descriptor [11] of the patch and vector quantize it by nearest-neighbour assignment against a $k_s = 1000$ word texton dictionary learned by k-means clustering of all patches in the training dataset. Similarly, for color we take the 36-D hue descriptor of [12] and vector quantize it against a $k_h = 100$ word color dictionary learned from the training set. Position is encoded by overlaying the image with an $m \times m$ grid of cells ($m = 8$) and using the index of the cell in which the patch falls as its position feature. Each patch is thus coded by three binary vectors with respectively $k_s$, $k_h$ and $k_p = m^2$ bits, each with a single bit set corresponding to
the observed visual word. Our CRF observation functions are simple linear functions of these three
vectors. Generatively, the three modalities are modelled as being independent given the patch label.
The naive Bayes model of the image omits the 4-neighbor couplings and thus assumes that each patch label depends only on its three observation functions. Parameter estimation reduces to trivially counting observed visual word frequencies for each label class and feature type. On the MSRC 9-class image dataset this model returns an average classification rate of 67.1% (see Section 4), so isolated appearance alone does not suffice for reliable patch labeling.
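As a concrete illustration of this encoding, here is a minimal NumPy sketch. The descriptor arrays, dictionary matrices and grid size are assumed inputs (real SIFT and hue extraction would come from a vision library), so this illustrates the coding scheme rather than reproducing the authors' implementation:

```python
import numpy as np

def quantize(descriptors, dictionary):
    """One-hot nearest-neighbour assignment of descriptors to dictionary words."""
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    codes = np.zeros((len(descriptors), len(dictionary)))
    codes[np.arange(len(descriptors)), d2.argmin(axis=1)] = 1.0
    return codes

def encode_patches(sift, hue, centers, image_shape, texton_dict, color_dict, m=8):
    """Concatenate texton, color and position one-hot codes for each patch."""
    tex = quantize(sift, texton_dict)                  # n_patches x k_s
    col = quantize(hue, color_dict)                    # n_patches x k_h
    # position: index of the m x m grid cell containing each patch center
    r = (centers[:, 0] * m // image_shape[0]).astype(int).clip(0, m - 1)
    c = (centers[:, 1] * m // image_shape[1]).astype(int).clip(0, m - 1)
    pos = np.zeros((len(centers), m * m))
    pos[np.arange(len(centers)), r * m + c] = 1.0
    return np.hstack([tex, col, pos])                  # n_patches x (k_s+k_h+m^2)
```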
In recent years models based on histograms of visual words have proven very successful for image categorization (deciding whether or not the image as a whole belongs to a given category of
scenes) [13]. Motivated by this, many of our models take the global image context into account
by including observation functions based on image-wide histograms of the visual words of their
patches. The hope is that this will help to overcome the ambiguities that arise when patches are classified in isolation. To this end, we define a conditional model for patch labels that incorporates both
local patch level features and global aggregate features. Let $x_i \in \{1, \dots, C\}$ denote the label of patch $i$, let $y_i$ denote the $W$-dimensional concatenated binary indicator vector of its three visual words ($W = k_s + k_h + k_p$), and let $h$ denote the normalized histogram of all visual words in the image, i.e. $\sum_i y_i$ normalized to sum to one. The conditional probability of the label $x_i$ is then modeled as

$$p(x_i = l \mid y_i, h) \propto \exp\Big(-\sum_{w=1}^{W} (\alpha_{wl}\, y_{iw} + \beta_{wl}\, h_w)\Big), \qquad (1)$$

where $\alpha$ and $\beta$ are $W \times C$ matrices of coefficients to be learned. We can think of this as a multiplicative combination of a local classifier based on the patch-level observation $y_i$ and a global context or bias based on the image-wide histogram $h$.
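In code, Eq. (1) is a softmax over classes whose score combines the local code $y_i$ with the image-wide histogram $h$. A small sketch, assuming `alpha` and `beta` are given as W x C NumPy arrays:

```python
import numpy as np

def patch_label_probs(Y, alpha, beta):
    """Posteriors of Eq. (1). Y: n_patches x W binary codes;
    alpha, beta: W x C coefficient matrices."""
    h = Y.sum(axis=0) / Y.sum()                   # normalized image-wide histogram
    scores = -(Y @ alpha + h @ beta)              # minus sign: exp(-E) convention
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(scores)
    return p / p.sum(axis=1, keepdims=True)
```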
To account for correlations among spatially neighboring patch labels, we add couplings between the labels of neighboring patches to the single patch model (1). Let $X$ denote the collection of all patch labels in the image and $Y$ denote the collected patch features. Then our CRF model for the coupled patch labels is:

$$p(X|Y) \propto \exp\big(-E(X|Y)\big), \qquad (2)$$

$$E(X|Y) = \sum_i \sum_{w=1}^{W} (\alpha_{w x_i} y_{iw} + \beta_{w x_i} h_w) + \sum_{i \sim j} \psi_{ij}(x_i, x_j), \qquad (3)$$

where $i \sim j$ denotes the set of all adjacent (4-neighbor) pairs of patches $i, j$. We can write $E(X|Y)$ without explicitly including $h$ as an argument because $h$ is a deterministic function of $Y$.
We have explored two forms of pairwise potential:

$$\psi_{ij}(x_i, x_j) = \tau_{x_i, x_j}\, [x_i \neq x_j], \qquad \text{and} \qquad \psi_{ij}(x_i, x_j) = (\sigma + \tau\, d_{ij})\, [x_i \neq x_j],$$

where $[\cdot]$ is one if its argument is true and zero otherwise, and $d_{ij}$ is a similarity measure over the appearance of the patches $i$ and $j$. In the first form, $\tau_{x_i, x_j}$ is a general symmetric weight matrix that needs to be learned. The second potential is designed to favor label transitions at image locations with high contrast. As in [3] we use $d_{ij} = \exp(-\|z_i - z_j\|^2 / (2\gamma))$, with $z_i \in \mathbb{R}^3$ denoting the average RGB value in the patch and $\gamma = \langle \|z_i - z_j\|^2 \rangle$, the average squared difference between neighboring RGB values in the image. Models using the first form of potential will be denoted 'CRFτ' and those using the second will be denoted 'CRFστ', or 'CRFσ' if τ has been fixed to zero. A graphical representation of the model is given in Figure 1.
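The contrast term $d_{ij}$ is simple to compute from mean patch colors. A sketch, assuming `z` holds the average RGB value of each patch and `pairs` lists the 4-neighbor index pairs:

```python
import numpy as np

def contrast_weights(z, pairs):
    """d_ij = exp(-||z_i - z_j||^2 / (2 gamma)) for each 4-neighbor pair,
    with gamma the mean squared RGB difference over neighboring patches."""
    i, j = np.array(pairs).T
    sq = ((z[i] - z[j]) ** 2).sum(axis=1)
    gamma = sq.mean()
    return np.exp(-sq / (2 * gamma))
```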
Figure 1: Graphical representation of the model with a single image-wide aggregate feature function denoted by h. Squares denote feature functions and circles denote variable nodes x_i (here connected in a 4-neighbor grid covering the image). Arrows denote single node potentials due to feature functions, and undirected edges represent pairwise potentials. The dashed lines indicate the aggregation of the single-patch observations y_i into h.

3 Estimating a Conditional Random Field from Partially Labeled Images
Conditional models $p(X|Y)$ are usually trained by maximizing the log-likelihood of correct classification of the training data, $\sum_{n=1}^{N} \log p(X_n | Y_n)$. This requires completely labeled training data, i.e. a collection of $N$ pairs $(X_n, Y_n)_{n=1,\dots,N}$ with completely known $X_n$. In practice this is restrictive and it is useful to develop methods that can learn from partially labeled examples, i.e. images that include either completely unlabeled patches or ones with a restricted but nontrivial set of possible labels. Formally, we will assume that an incomplete labeling $X$ is known to belong to an associated set of admissible labelings $\mathcal{A}$ and we maximise the log-likelihood for the model to predict any labeling in $\mathcal{A}$:

$$\mathcal{L} = \log p(X \in \mathcal{A} \mid Y) = \log \sum_{X \in \mathcal{A}} p(X|Y) = \log \sum_{X \in \mathcal{A}} \exp\big(-E(X|Y)\big) - \log \sum_{X} \exp\big(-E(X|Y)\big). \qquad (4)$$
Note that the log-likelihood is the difference between the partition functions of the restricted and unrestricted labelings, $p(X \mid Y, X \in \mathcal{A})$ and $p(X|Y)$. For completely labeled training images this reduces trivially to the standard labeled log-likelihood, while for partially labeled ones both terms of the log-likelihood are typically intractable because the set $\mathcal{A}$ contains $O(C^k)$ distinct labelings $X$, where $k$ is the number of unlabeled patches and $C$ is the number of possible labels. Similarly, to find maximum likelihood parameter estimates using gradient descent we need to calculate partial derivatives with respect to each parameter $\theta$, and in general both terms are again intractable:

$$\frac{\partial \mathcal{L}}{\partial \theta} = \sum_X \big(p(X|Y) - p(X \mid Y, X \in \mathcal{A})\big)\, \frac{\partial E(X|Y)}{\partial \theta}. \qquad (5)$$
However the situation is not actually much worse than the fully-labeled case. In any case we need to approximate the full partition function $\log \sum_X \exp(-E(X|Y))$ or its derivatives, and any method for doing so can also be applied to the more restricted sum $\log \sum_{X \in \mathcal{A}} \exp(-E(X|Y))$ to give a contrast-of-partition-function based approximation. Here we will use the Bethe free energy approximation for both partition functions [14]:

$$\mathcal{L} \approx F_{\mathrm{Bethe}}\big(p(X|Y)\big) - F_{\mathrm{Bethe}}\big(p(X \mid Y, X \in \mathcal{A})\big). \qquad (6)$$
The Bethe approximation is a variational method based on approximating the complete distribution $p(X|Y)$ as the product of its pairwise marginals (normalized by single-node marginals) that would apply if the graph were a tree. The necessary marginals are approximated using Loopy Belief Propagation (LBP) and the log-likelihood and its gradient are then evaluated using them [14]. Here LBP is run twice (with the singleton marginals initialized from the single node potentials), once to estimate the marginals of $p(X|Y)$ and once for $p(X \mid Y, X \in \mathcal{A})$. We used standard undamped LBP with uniform initial messages without encountering any convergence problems. In practice the approximate gradient and objective were consistent enough to allow parameter estimation using standard conjugate gradient optimization with adaptive step lengths based on monitoring the Bethe free-energy.
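One evaluation of the objective and gradient can be sketched as follows. The `loopy_bp_marginals` helper is an assumption standing in for any LBP implementation that returns node marginals and the Bethe free energy; the clamping of labels outside A follows the description above:

```python
def partial_label_gradient(unary, pairwise, admissible, loopy_bp_marginals):
    """One evaluation of the marginalized objective (Eq. 6) and of the
    node-marginal part of its gradient (Eq. 5), via two runs of loopy BP.

    unary: n_nodes x C NumPy array of single-node energies; pairwise: edge energies;
    admissible: per-node iterable of allowed labels (the set A);
    loopy_bp_marginals: assumed helper returning (node_marginals, bethe_free_energy).
    """
    # run 1: unrestricted model p(X|Y)
    marg_free, f_free = loopy_bp_marginals(unary, pairwise)
    # run 2: clamped model p(X | Y, X in A); forbidding a label amounts to
    # giving it an infinite unary energy before running LBP
    clamped = unary.copy()
    for node, allowed in enumerate(admissible):
        for label in range(clamped.shape[1]):
            if label not in allowed:
                clamped[node, label] = float('inf')
    marg_clamped, f_clamped = loopy_bp_marginals(clamped, pairwise)
    log_lik = f_free - f_clamped           # Bethe approximation of Eq. (4)
    grad_unary = marg_free - marg_clamped  # difference of marginals, Eq. (5)
    return log_lik, grad_unary
```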
| Model | Building (16.1%) | Grass (32.4%) | Tree (12.3%) | Cow (6.2%) | Sky (15.4%) | Plane (2.2%) | Face (4.4%) | Car (9.5%) | Bike (1.5%) | Per Pixel |
|---|---|---|---|---|---|---|---|---|---|---|
| IND loc only | 63.8 | 88.3 | 51.9 | 56.7 | 88.4 | 28.6 | 64.0 | 60.7 | 24.9 | 67.1 |
| IND loc+glo | 69.2 | 88.1 | 70.1 | 69.3 | 89.1 | 44.8 | 78.1 | 67.8 | 40.8 | 74.4 |
| CRFσ loc only | 75.0 | 88.6 | 72.7 | 70.5 | 94.7 | 55.5 | 83.2 | 81.4 | 69.1 | 80.7 |
| CRFσ loc+glo | 73.6 | 91.1 | 82.1 | 73.6 | 95.7 | 78.3 | 89.5 | 84.5 | 81.4 | 84.9 |
| CRFσ loc+glo del unlabeled | 84.6 | 91.0 | 76.6 | 70.6 | 91.3 | 43.9 | 77.8 | 71.4 | 30.6 | 78.4 |
| CRFτ loc only | 71.4 | 86.8 | 80.2 | 81.0 | 94.2 | 63.8 | 86.3 | 85.7 | 77.3 | 82.3 |
| CRFτ loc+glo | 74.6 | 88.7 | 82.5 | 82.2 | 93.9 | 61.7 | 88.8 | 82.8 | 76.8 | 83.3 |
| CRFστ loc only | 65.6 | 85.4 | 78.2 | 74.3 | 95.4 | 61.8 | 84.8 | 85.2 | 79.4 | 80.3 |
| CRFστ loc+glo | 75.0 | 88.5 | 82.3 | 81.0 | 94.4 | 60.6 | 88.7 | 82.2 | 76.1 | 83.1 |
| Schroff et al. [15] | 56.7 | 84.8 | 76.4 | 83.8 | 81.1 | 53.8 | 68.5 | 71.4 | 72.0 | 75.2 |
| PLSA-MRF [10] | 74.0 | 88.7 | 64.4 | 77.4 | 95.7 | 92.2 | 88.8 | 81.1 | 78.7 | 82.3 |

Table 1: Classification accuracies on the 9 MSRC classes using different models. For each class its frequency in the ground truth labeling is also given.

Comparison with excision of unlabeled nodes. The above training procedure requires two runs of loopy BP. A simple and often-used alternative is to discard unlabeled patches by excising nodes
that correspond to unlabeled or partially labeled patches from the graph. This leaves a random field with one or more completely labeled connected components whose log-likelihood $p(X'|Y')$ we maximize directly using gradient based methods. Equivalently, we can use the complete model but set all of the pairwise potentials connected to unlabeled nodes to zero: this decouples the labels of the unlabeled nodes from the rest of the field. As a result $p(X|Y)$ and $p(X \mid Y, X \in \mathcal{A})$ are equivalent for the unlabeled nodes and their contribution to the log-likelihood in Eq. (4) and the gradient in Eq. (5) vanishes.
The problem with this approach is that it systematically overestimates spatial coupling strengths. Looking at the training labelings in Figure 3 and Figure 4, we see that pixels near class boundaries often remain unlabeled. Since we leave patches unlabeled if they contain unlabeled pixels, label transitions are underrepresented in the training data, which causes the strength of the pairwise couplings to be greatly overestimated. In contrast, the full CRF model provides realistic estimates because it is forced to include a (fully coupled) label transition somewhere in the unlabeled region.
4 Experimental Results
This section analyzes the performance of our segmentation models in detail and compares it to other existing methods. In our first set of experiments we use the Microsoft Research Cambridge (MSRC) dataset¹. This consists of 240 images of 213 × 320 pixels and their partial pixel-level labelings. The labelings assign pixels to one of nine classes: building, grass, tree, cow, sky, plane, face, car, and bike. About 30% of the pixels are unlabeled. Some sample images and labelings are shown in Figure 4. In our experiments we divide the dataset into 120 images for training and 120 for testing, reporting average results over 20 random train-test partitions. We used 20 × 20 pixel patches with centers at 10 pixel intervals. (For the patch size see the red disc in Figure 4.)
To obtain a labeling of the patches, pixels are assigned to the nearest patch center. Patches are allowed to have any label seen among their pixels, with unlabeled pixels being allowed to have any label. Learning and inference take place at the patch level. To map the patch-level segmentation back to the pixel level we assign each pixel the marginal of the patch with the nearest center. (In Figure 4 the segmentations were post-processed by applying a Gaussian filter over the pixel marginals with the scale set to half the patch spacing.) The performance metrics ignore unlabeled test pixels.
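This pixel-patch mapping can be sketched as follows; the integer-division nearest-center rule and the grid layout are simplifying assumptions, and SciPy's Gaussian filter implements the smoothing mentioned above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def patch_marginals_to_pixels(marginals, grid_shape, image_shape, spacing):
    """Map patch-level class marginals back to a pixel labeling.
    marginals: (rows*cols) x C array on a rows x cols patch grid."""
    rows, cols = grid_shape
    C = marginals.shape[1]
    M = marginals.reshape(rows, cols, C)
    # nearest patch center, approximated by integer division of coordinates
    ys = np.clip(np.arange(image_shape[0]) // spacing, 0, rows - 1)
    xs = np.clip(np.arange(image_shape[1]) // spacing, 0, cols - 1)
    pix = M[ys[:, None], xs[None, :], :]               # H x W x C marginals
    for c in range(C):                                 # Gaussian post-smoothing
        pix[:, :, c] = gaussian_filter(pix[:, :, c], sigma=spacing / 2)
    return pix.argmax(axis=2)                          # per-pixel label map
```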
The relative contributions of the different components of our model are summarized in Table 1. Models that incorporate 4-neighbor spatial couplings are denoted 'CRF' while ones that incorporate only (local or global) patch-level potentials are denoted 'IND'. Models that include global aggregate features are denoted 'loc+glo', while ones that include only local patch-level features are denoted 'loc only'.
¹ Available from http://research.microsoft.com/vision/cambridge/recognition.
Figure 2: Classification accuracy as a function of the aggregation fineness c, for the 'IND' (individual patch) classifier using a single training and test set. Aggregate features (AFs) were computed in each cell of a c × c image partition. Results are given for models with no AFs (solid line), with AFs of a single c (dotted curve), with AFs on grids 1 × 1 up to c × c (solid curve), and with AFs on grids c × c up to 10 × 10 (dashed curve).
Benefits of aggregate features. The first main conclusion is that including global aggregate features helps, for example improving the average classification rate on the MSRC dataset from 67.1% to 74.4% for the spatially uncoupled 'IND' model and from 80.7% to 84.9% for the 'CRFσ' spatial model.
The idea of aggregation can be generalized to scales smaller than the complete image. We experimented with dividing the image into c × c grids for a range of values of c. In each cell of the grid we compute a separate histogram over the visual words, and for each patch in the cell we include an energy term based on this histogram in the same way as for the image-wide histogram in Eq. (1). Figure 2 shows how the performance of the individual patch classifier depends on the use of aggregate features. From the dotted curve in the figure we see that although using larger cells to aggregate features is generally more informative, even fine 10 × 10 subdivisions (containing only 6-12 patches per cell) provide a significant performance increase. Furthermore, including aggregates computed at several different scales does help, but the performance increment is small compared to the gain obtained with just image-wide aggregates. Therefore we included only image-wide aggregates in the subsequent experiments.
Benefits of including spatial coupling. The second main conclusion from Table 1 is that including spatial couplings (pairwise CRF potentials) helps, respectively increasing the accuracy by 10.5% for 'loc+glo' and by 13.6% for 'loc only' for 'CRFσ' relative to 'IND'. The improvement is particularly noticeable for rare classes when global aggregate features are not included: in this case the single node potentials are less informative and frequent classes tend to be unduly favored due to their large a priori probability.
When the image-wide aggregate features are included ('loc+glo'), the simplest pairwise potential, the 'CRFσ' Potts model, works better than the more general models 'CRFτ' and 'CRFστ', while if only the local features are included ('loc only'), the class-dependent pairwise potential 'CRFτ' works best. The performance increment from global features is smallest for 'CRFτ', the model that also includes local contextual information. The overall influence of the local label transition preferences expressed in 'CRFτ' appears to be similar to that of the global contextual information provided by image-wide aggregate features.
Benefits of training by marginalizing partial labelings. Our third main conclusion from Table 1 is that our marginalization based training method for handling missing labels is superior to the common heuristic of deleting any unlabeled patches. Learning a 'CRFσ loc+glo' model by removing all unlabeled patches ('del unlabeled' in the table) leads to an estimate σ ≈ 11.5, whereas the maximum likelihood estimate of (4) leads to σ ≈ 1.9. In particular, with 'delete unlabeled' training the accuracy of the model drops significantly for the classes plane and bike, both of which have a small area relative to their boundaries and thus many partially labeled patches. It is interesting to note that even though σ has been severely over-estimated in the 'delete unlabeled' model, the CRF still improves over the individual patch classification obtained with 'IND loc+glo' for most classes, albeit not for bike and only marginally for plane.
Recognition as a function of the amount of labeling. We now consider how the performance drops as the fraction of labeled pixels decreases. We applied a morphological erosion operator to the manual annotations, where we varied the size of the disk-shaped structuring element from 0 to 50 pixels in steps of 5.
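The erosion itself is standard morphology. A sketch using scipy.ndimage, where the per-class mask handling and the void-label convention are our assumptions:

```python
import numpy as np
from scipy import ndimage

def erode_annotation(labels, radius, void=-1):
    """Erode every class region of a label image with a disk-shaped
    structuring element, marking the eroded border pixels as void."""
    if radius == 0:
        return labels.copy()
    yy, xx = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (xx ** 2 + yy ** 2) <= radius ** 2
    out = np.full_like(labels, void)
    for c in np.unique(labels):
        if c == void:
            continue
        mask = ndimage.binary_erosion(labels == c, structure=disk)
        out[mask] = c
    return out
```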
Figure 3: Recognition performance when learning from increasingly eroded label images (left).
Example image with its original annotation, and erosions thereof with disks of size 10 and 20 (right).
In this way we obtain a series of annotations that resemble increasingly sloppy manual annotations, see Figure 3. The figure also shows the recognition performance of 'CRFσ loc+glo' and 'IND loc+glo' as a function of the fraction of labeled pixels. In addition to its superior performance when trained on well labeled images, the CRF maintains its performance better as the labelling becomes sparser. Note that 'CRFσ loc+glo' learned from label images eroded with a disc of radius 30 (only 28% of pixels labeled) still outperforms 'IND loc+glo' learned from the original labeling (71% of pixels labeled). Also, the CRF actually performs better with 5 pixels of erosion than with the original labeling, presumably because ambiguities related to training patches with mixed pixel labels are reduced.
Comparison with related work. Table 1 also compares our recognition results on the MSRC dataset with those reported in [15, 10]. Our CRF model clearly outperforms the approach of [15], which uses aggregate features of an optimized scale but lacks spatial coupling in a random field, giving a performance very similar to that of our 'IND loc+glo' model. Our CRF model also performs slightly better than our generative approach of [10], which is based on the same feature set but differs in its implementation of image-wide contextual information ([10] also used a 90%-10% training-test partition, not 50%-50% as here).
Using the Sowerby dataset and a subset of the Corel dataset we also compare our model with two CRF models that operate at pixel-level. The Sowerby dataset consists of 104 images of 96 × 64 pixels of urban and rural scenes labeled with 7 different classes: sky, vegetation, road marking, road surface, building, street objects and cars. The subset of the Corel dataset contains 100 images of 180 × 120 pixels of natural scenes, also labeled with 7 classes: rhino/hippo, polar bear, water, snow, vegetation, ground, and sky. Here we used 10 × 10 pixel patches, with a spacing of respectively 2 and 5 pixels for the Sowerby and Corel datasets. The other parameters were kept as before. Table 2 compares the recognition accuracies averaged over pixels for our CRF and independent patch models to the results reported on these datasets for TextonBoost [7] and the multi-scale CRF model of [5]. In this table 'IND' stands for results obtained when only the single node potentials are used in the respective models, disregarding the spatial random field couplings. The total training time and test time per image are listed for the full CRF models. The results show that on these datasets our model performs comparably to pixel-level approaches while being much faster to train and test since it operates at patch-level and uses standard features as opposed to the boosting procedure of [7].
5 Conclusion
We presented several image-patch-level CRF models for semantic image labeling that incorporate
both local patch-level observations and more global contextual features based on aggregates of observations at several scales. We showed that partially labeled training images could be handled by
maximizing the total likelihood of the image segmentations that comply with the partial labeling,
using Loopy BP and Bethe free-energy approximations for the calculations. This allowed us to learn
effective CRF models from images where only a small fraction of the pixels were labeled and class
transitions were not observed. Experiments on the MSRC dataset showed that including image-wide aggregate features is very helpful, while including additional aggregates at finer scales gives relatively little further improvement. Comparative experiments showed that our patch-level CRFs have comparable performance to state-of-the-art pixel-level models while being much more efficient because the number of patches is much smaller than the number of pixels.

| Model | Sowerby IND | Sowerby CRF | Sowerby train | Sowerby test | Corel IND | Corel CRF | Corel train | Corel test |
|---|---|---|---|---|---|---|---|---|
| TextonBoost [7] | 85.6% | 88.6% | 5h | 10s | 68.4% | 74.6% | 12h | 30s |
| He et al. [5] CRF | 82.4% | 89.5% | Gibbs | Gibbs | 66.9% | 80.0% | Gibbs | Gibbs |
| CRFσ loc+glo | 86.0% | 87.4% | 20min | 5s | 66.9% | 74.6% | 15min | 3s |

Table 2: Recognition accuracy and speeds on the Corel and Sowerby datasets.
References
[1] P. Jansen, W. van der Mark, W. van den Heuvel, and F. Groen. Colour based off-road environment and terrain type classification. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, pages 216-221, 2005.
[2] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):712-741, 1984.
[3] C. Rother, V. Kolmogorov, and A. Blake. GrabCut: interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics, 23(3):309-314, 2004.
[4] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In Proceedings of the International Conference on Machine Learning, volume 18, pages 282-289, 2001.
[5] X. He, R. Zemel, and M. Carreira-Perpiñán. Multiscale conditional random fields for image labelling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 695-702, 2004.
[6] S. Kumar and M. Hebert. A hierarchical field framework for unified context-based classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1284-1291, 2005.
[7] J. Shotton, J. Winn, C. Rother, and A. Criminisi. TextonBoost: joint appearance, shape and context modeling for multi-class object recognition and segmentation. In Proceedings of the European Conference on Computer Vision, pages 1-15, 2006.
[8] A. Quattoni, M. Collins, and T. Darrell. Conditional random fields for object recognition. In Advances in Neural Information Processing Systems, volume 17, pages 1097-1104, 2005.
[9] P. Carbonetto, G. Dorkó, C. Schmid, H. Kück, and N. de Freitas. A semi-supervised learning approach to object recognition with spatial integration of local features and segmentation cues. In Toward Category-Level Object Recognition, pages 277-300, 2006.
[10] J. Verbeek and B. Triggs. Region classification with Markov field aspect models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[11] D. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.
[12] J. van de Weijer and C. Schmid. Coloring local feature extraction. In Proceedings of the European Conference on Computer Vision, pages 334-348, 2006.
[13] The 2005 PASCAL visual object classes challenge. In F. d'Alché-Buc, I. Dagan, and J. Quinonero, editors, Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification, and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop. Springer, 2006.
[14] J. Yedidia, W. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. Technical Report TR-2001-22, Mitsubishi Electric Research Laboratories, 2001.
[15] F. Schroff, A. Criminisi, and A. Zisserman. Single-histogram class models for image segmentation. In Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, 2006.
Figure 4: Samples from the MSRC, Sowerby, and Corel datasets with segmentation and labeling (panel titles per dataset: 'CRFσ loc+glo' and 'Labeling').
DIFFRAC: a discriminative and flexible
framework for clustering
Francis R. Bach
INRIA - Willow Project
École Normale Supérieure
Normale Sup?erieure
45, rue d?Ulm, 75230 Paris, France
francis.bach@mines.org
Zaïd Harchaoui
LTCI, TELECOM ParisTech and CNRS
46, rue Barrault
75634 Paris cedex 13, France
zaid.harchaoui@enst.fr
Abstract
We present a novel linear clustering framework (DIFFRAC) which relies on a linear discriminative cost function and a convex relaxation of a combinatorial optimization problem. The large convex optimization problem is solved through a
sequence of lower dimensional singular value decompositions. This framework
has several attractive properties: (1) although apparently similar to K-means, it
exhibits superior clustering performance than K-means, in particular in terms of
robustness to noise. (2) It can be readily extended to nonlinear clustering if the
discriminative cost function is based on positive definite kernels, and can then be
seen as an alternative to spectral clustering. (3) Prior information on the partition
is easily incorporated, leading to state-of-the-art performance for semi-supervised
learning, for clustering or classification. We present empirical evaluations of our
algorithms on synthetic and real medium-scale datasets.
1 Introduction
Many clustering frameworks have already been proposed, with numerous applications in machine
learning, exploratory data analysis, computer vision and speech processing. However, these unsupervised learning techniques have not reached the level of sophistication of supervised learning
techniques, that is, for all methods, there are still a significant number of explicit or implicit parameters to tune for successful clustering, most generally, the number of clusters and the metric or the
similarity structure over the space of configurations.
In this paper, we present a discriminative and flexible framework for clustering (DIFFRAC), which is aimed at alleviating some of those practical annoyances. Our framework is based on a recent
is aimed at alleviating some of those practical annoyances. Our framework is based on a recent
set of works [1, 2] that have used the support vector machine (SVM) cost function used for linear
classification as a clustering criterion, with the intuitive goal of looking for clusters which are most
linearly separable. This line of work has led to promising results; however, the large convex optimization problems that have to be solved prevent application to datasets larger than few hundreds
data points.1 In this paper, we consider the maximum value of the regularized linear regression on
indicator matrices. By choosing a square loss (instead of the hinge loss), we obtain a simple cost
function which can be simply expressed in closed form and is amenable to specific efficient convex
optimization algorithms, that can deal with large datasets of size 10,000 to 50,000 data points. Our
cost function turns out to be a linear function of the 'equivalence matrix' M, which is a square
{0, 1}-matrix indexed by the data points, with value one for all pairs of data points that belong to
the same clusters, and zero otherwise. In order to minimize this cost function with respect to M , we
follow [1] and [2] by using convex outer approximations of the set of equivalence matrices, with a
novel constraint on the minimum number of elements per cluster, which is based on the eigenvalues
of M , and essential to the success of our approach.
¹ Recent work [3] has looked at more efficient formulations.
In Section 2, we present a derivation of our cost function and of the convex relaxations. In Section 3,
we show how the convex relaxed problem can be solved efficiently through a sequence of lower
dimensional singular value decompositions, while in Section 4, we show how a priori knowledge
can be incorporated into our framework. Finally, in Section 5, we present simulations comparing
our new set of algorithms to other competing approaches.
2 Discriminative clustering framework
In this section, we first assume that we are given $n$ points $x_1, \dots, x_n$ in $\mathbb{R}^d$, represented in a matrix $X \in \mathbb{R}^{n \times d}$. We represent the various partitions of $\{1, \dots, n\}$ into $k > 1$ clusters by indicator matrices $y \in \{0, 1\}^{n \times k}$ such that $y 1_k = 1_n$, where $1_k$ and $1_n$ denote the constant vectors of all ones, of dimensions $k$ and $n$. We let $\mathcal{I}_k$ denote the set of $k$-class indicator matrices.
2.1 Discriminative clustering cost
Given $y$, we consider the regularized linear regression problem of $y$ given $X$, which takes the form:

$$\min_{w \in \mathbb{R}^{d \times k},\; b \in \mathbb{R}^{1 \times k}} \; \frac{1}{n} \|y - Xw - 1_n b\|_F^2 + \kappa\, \operatorname{tr} w^\top w, \qquad (1)$$

where the Frobenius norm is defined for any vector or rectangular matrix as $\|A\|_F^2 = \operatorname{tr} AA^\top = \operatorname{tr} A^\top A$. Denoting $f(x) = w^\top x + b \in \mathbb{R}^k$, this corresponds to a multi-label classification problem with square loss functions [4, 5]. The main advantage of this cost function is the possibility of (a) minimizing the regularized cost in closed form and (b) including a bias term by simply centering the data; namely, the global optimum is attained at $w^\star = (X^\top \Pi_n X + n\kappa I)^{-1} X^\top \Pi_n y$ and $b^\star = \frac{1}{n} 1_n^\top (y - X w^\star)$, where $\Pi_n = I_n - \frac{1}{n} 1_n 1_n^\top$ is the usual centering projection matrix. The optimal value is then equal to

$$J(y, X, \kappa) = \operatorname{tr} y y^\top A(X, \kappa), \qquad (2)$$
where the $n \times n$ matrix $A(X, \kappa)$ is defined as:

$$A(X, \kappa) = \frac{1}{n} \Pi_n \big(I_n - X (X^\top \Pi_n X + n\kappa I)^{-1} X^\top\big) \Pi_n. \qquad (3)$$

The matrix $A(X, \kappa)$ is positive semidefinite, i.e., for all $u \in \mathbb{R}^n$, $u^\top A(X, \kappa) u \geq 0$, and $1_n$ is a singular vector of $A(X, \kappa)$, i.e., $A(X, \kappa) 1_n = 0$.
Following [1] and [2], we are thus looking for a $k$-class indicator matrix $y$ such that $\operatorname{tr} y y^\top A(X, \kappa)$ is minimal, i.e., for a partition such that the clusters are most linearly separated, where the separability of clusters is measured through the minimum of the discriminative cost with respect to all linear classifiers. This combinatorial optimization is NP-hard in general [6], but efficient convex relaxations may be obtained, as presented in the next section.
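Both $A(X, \kappa)$ in Eq. (3) and the cost $J(y, X, \kappa)$ in Eq. (2) take only a few lines of NumPy. A minimal sketch (variable names are ours):

```python
import numpy as np

def diffrac_cost_matrix(X, kappa):
    """A(X, kappa) of Eq. (3), built from centered data."""
    n, d = X.shape
    Pi = np.eye(n) - np.ones((n, n)) / n               # centering projection
    Xc = Pi @ X
    inner = np.linalg.solve(Xc.T @ Xc + n * kappa * np.eye(d), Xc.T)
    return (Pi - Xc @ inner) / n

def diffrac_cost(y, X, kappa):
    """J(y, X, kappa) = tr(y y' A(X, kappa)) of Eq. (2);
    y is an n x k indicator matrix."""
    A = diffrac_cost_matrix(X, kappa)
    return float(np.trace(y.T @ A @ y))
```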
2.2 Indicator and equivalence matrices
The cost function defined in Eq. (2) only involves the matrix $M = yy^\top \in \mathbb{R}^{n \times n}$. We let $\mathcal{E}_k$ denote the set of '$k$-class equivalence matrices', i.e., the set of matrices $M$ such that there exists a $k$-class indicator matrix $y$ with $M = yy^\top$.
There are many outer convex approximations of the discrete sets $\mathcal{E}_k$, based on different properties of matrices in $\mathcal{E}_k$, that were used in different contexts, such as maximum cut problems [6] or correlation clustering [7]. We have the following usual properties of equivalence matrices (independent of $k$): if $M \in \mathcal{E}_k$, then (a) $M$ is positive semidefinite (denoted as $M \succcurlyeq 0$), (b) $M$ has nonnegative values (denoted as $M \geq 0$), and (c) the diagonal of $M$ is equal to $1_n$ (denoted as $\operatorname{diag}(M) = 1_n$).
Moreover, if $M$ corresponds to at most $k$ clusters, we have $M \succcurlyeq \frac{1}{k} 1_n 1_n^\top$, which is a consequence of the convex outer approximation of [6] for the maximum $k$-cut problem. We thus use the following convex outer approximation:

$$\mathcal{C}_k = \{M \in \mathbb{R}^{n \times n},\; M = M^\top,\; \operatorname{diag}(M) = 1_n,\; M \geq 0,\; M \succcurlyeq \tfrac{1}{k} 1_n 1_n^\top\} \supset \mathcal{E}_k.$$

Note that when $k = 2$, the constraint $M \geq 0$ (pointwise nonnegativity) is implied by the other constraints.
2.3 Minimum cluster sizes
Given the discriminative nature of our cost function (and in particular that $A(X, \kappa) 1_n = 0$), the minimum value 0 is always obtained with $M = 1_n 1_n^\top$, a matrix of rank one, equivalent to a single cluster. Given the number of desired clusters, we thus need to add some prior knowledge regarding the size of those clusters. Following [1], we impose a minimum size $\lambda_0$ for each cluster, through row sums and eigenvalues:
Row sums If $M \in \mathcal{E}_k$, then $M 1_n \geq \lambda_0 1_n$ and $M 1_n \leq (n - (k-1)\lambda_0) 1_n$ (the clusters must be smaller than $n - (k-1)\lambda_0$ if they are all larger than $\lambda_0$); this is the same constraint as in [1].
Eigenvalues When $M \in \mathcal{E}_k$, the sizes of the clusters are exactly the $k$ largest eigenvalues of $M$. Thus, for a matrix in $\mathcal{E}_k$, the minimum cluster size constraint is equivalent to $\sum_{i=1}^{n} 1_{\lambda_i(M) \geq \lambda_0} \geq k$, where $\lambda_1(M), \dots, \lambda_n(M)$ are the $n$ eigenvalues of $M$. Functions of the form $\Phi(M) = \sum_{i=1}^{n} \varphi(\lambda_i(M))$ are referred to as spectral functions and are particularly interesting in machine learning and optimization, since $\Phi$ inherits from $\varphi$ many of its properties, such as differentiability and convexity [8]. The previous constraint can be seen as $\Phi(M) \geq k$ with $\varphi(\lambda) = 1_{\lambda \geq \lambda_0}$, which is not concave and thus does not lead to a convex constraint. In this paper we propose to use the concave upper envelope of this function, namely $\varphi_{\lambda_0}(\lambda) = \min\{\lambda/\lambda_0, 1\}$, thus leading to a novel additional constraint.
Our final convex relaxation thus consists in minimizing $\operatorname{tr} A(X, \kappa) M$ with respect to $M \in \mathcal{C}_k$ such that $\Phi_{\lambda_0}(M) \geq k$, $M 1_n \geq \lambda_0 1_n$ and $M 1_n \leq (n - (k-1)\lambda_0) 1_n$, where $\Phi_{\lambda_0}(M) = \sum_{i=1}^{n} \min\{\lambda_i(M)/\lambda_0, 1\}$. The clustering results are empirically robust to the value of $\lambda_0$; in all our simulations we use $\lambda_0 = \lceil n/2k \rceil$.
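The spectral constraint is cheap to evaluate. A sketch of $\Phi_{\lambda_0}$:

```python
import numpy as np

def phi(M, lam0):
    """Phi_{lambda0}(M) = sum_i min(lambda_i(M) / lambda0, 1); the relaxed
    minimum-cluster-size constraint requires Phi(M) >= k."""
    eigvals = np.linalg.eigvalsh((M + M.T) / 2)   # symmetrize for stability
    return float(np.minimum(eigvals / lam0, 1.0).sum())
```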
2.4 Comparison with K-means
Our method bears some resemblance to the usual K-means algorithm. Indeed, in the unregularized case ($\kappa = 0$), we aim to minimize

$$\operatorname{tr}\; \Pi_n \big(I_n - X (X^\top \Pi_n X)^{-1} X^\top\big) \Pi_n\, y y^\top.$$

Results from [9] show that K-means aims at minimizing the following criterion with respect to $y$:

$$\min_{\mu \in \mathbb{R}^{k \times d}} \|X - y\mu\|_F^2 = \operatorname{tr} \big(I_n - y (y^\top y)^{-1} y^\top\big) (\Pi_n X)(\Pi_n X)^\top.$$

The main differences between the two cost functions are that (1) we require an additional parameter, namely the minimum number of elements per cluster, and (2) our cost function normalizes the data, while the K-means distortion measure normalizes the labels. This apparently little difference has a significant impact on the performance, as our method is invariant by affine scaling of the data, while K-means is only invariant by translation, isometries and isotropic scaling, and is very much dependent on how the data are presented (in particular the marginal scaling of the variables). In Figure 1, we compare the two algorithms on a simple synthetic task with noisy dimensions, showing that ours is more robust to noisy features. Note that using a discriminative criterion based on the square loss may lead to the masking problem [4], which can be dealt with in the usual way by using second-order polynomials or, equivalently, a polynomial kernel.
2.5 Kernels
The matrix $A(X, \kappa)$ in Eq. (3) can be expressed only in terms of the Gram matrix $K = XX^\top$. Indeed, using the matrix inversion lemma, we get:

$$A(K, \kappa) = \kappa\, \Pi_n (\tilde{K} + n\kappa I_n)^{-1} \Pi_n, \qquad (4)$$

where $\tilde{K} = \Pi_n K \Pi_n$ is the 'centered Gram matrix' of the points $X$. We can thus apply our framework with any positive definite kernel [5].
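A sketch of Eq. (4), computing $A$ directly from a precomputed Gram matrix $K$:

```python
import numpy as np

def diffrac_cost_matrix_kernel(K, kappa):
    """A(K, kappa) = kappa * Pi_n (K_tilde + n kappa I)^{-1} Pi_n, Eq. (4)."""
    n = K.shape[0]
    Pi = np.eye(n) - np.ones((n, n)) / n
    K_tilde = Pi @ K @ Pi                              # centered Gram matrix
    return kappa * Pi @ np.linalg.solve(K_tilde + n * kappa * np.eye(n), Pi)
```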
Figure 1: Comparison with K-means, on a two-dimensional dataset composed of two linearly separable bumps (100 data points, plotted in the left panel), with additional random independent noise dimensions (with normal distributions with the same marginal variances as the 2D data). The clustering performance is plotted against the number of irrelevant dimensions, for regular K-means and our DIFFRAC approach (right panel, averaged over 50 replications with the standard deviation in dotted lines). The clustering performance is measured by a metric between partitions defined in Section 5, which is always between 0 and 1.

2.6 Additional relaxations
Our convex optimization problem can be further relaxed. An interesting relaxation is obtained by (1) relaxing the constraint $M \succcurlyeq \frac{1}{k} 1_n 1_n^\top$ into $M \succcurlyeq 0$, (2) relaxing $\operatorname{diag}(M) = 1_n$ into $\operatorname{tr} M = n$, and (3) removing the constraint $M \geq 0$ and the constraints on the row sums. A short calculation shows that this relaxation leads to an eigenvalue problem: let $A = \sum_{i=1}^{n} a_i u_i u_i^\top$ be an eigenvalue decomposition of $A$, where $a_1 \leq \dots \leq a_n$ are the sorted eigenvalues. The minimal value of the relaxed convex optimization problem is attained at $M^\star = \lambda_0 \sum_{i=1}^{j} u_i u_i^\top + (n - \lambda_0 j)\, u_{j+1} u_{j+1}^\top$, with $j = \lfloor n/\lambda_0 \rfloor$. This additional relaxation into an eigenvalue problem is the basis of our efficient optimization algorithm in Section 3.
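The closed-form solution of this fully relaxed problem can be sketched as follows; the $\lambda_0$-scaling of the leading eigenvectors follows the expression above, and the sketch assumes $\lambda_0 > 1$ so that $j < n$:

```python
import numpy as np

def relaxed_solution(A, lam0):
    """Closed-form minimizer of tr(A M) over M psd with tr M = n and
    Phi_{lambda0}(M) >= k (pointwise and row-sum constraints dropped)."""
    n = A.shape[0]
    vals, vecs = np.linalg.eigh((A + A.T) / 2)    # ascending eigenvalues of A
    j = int(n // lam0)                            # number of lambda0-sized blocks
    M = lam0 * (vecs[:, :j] @ vecs[:, :j].T)      # j smallest eigenvectors of A
    if j < n and n - lam0 * j > 0:                # remaining trace mass, if any
        M = M + (n - lam0 * j) * np.outer(vecs[:, j], vecs[:, j])
    return M
```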
In the kernel formulation, since the smallest eigenvectors of $A = \kappa\, \Pi_n (\tilde{K} + n\kappa I_n)^{-1} \Pi_n$ are the same as the largest eigenvectors of $\tilde{K}$, the relaxed problem is thus equivalent to kernel principal component analysis [10, 5] in the kernel setting, and in the linear setting to regular PCA (followed by our rounding procedure presented in Section 3.3). In the linear setting, since PCA has no clustering effects in general², it is clear that the constraints that were removed are essential to the clustering performance. In the kernel setting, experiments have shown that the most important constraint to keep in order to achieve the best embedding and clustering is the constraint $\operatorname{diag}(M) = 1_n$.
3 Optimization
Since $\varphi_{\lambda_0}(\lambda) = \frac{1}{2\lambda_0}(\lambda + \lambda_0 - |\lambda - \lambda_0|)$, and the sum of singular values can be represented as a semidefinite program (SDP), our problem is an SDP. It can thus be solved to any given accuracy in polynomial time by general purpose interior-point methods [12]. However, the number of variables is $O(n^2)$ and thus the complexity of general purpose algorithms will be at least $O(n^7)$; this remains much too slow for medium scale problems, where the number of data points is between 1,000 and 10,000. We now present an efficient approximate method that uses the specificity of the problem to reduce the computational load.
3.1 Optimization by partial dualization
We saw earlier that by relaxing some of the constraints, we get back an eigenvalue problem. Eigenvalue decompositions are among the most important tools in numerical algebra, algorithms and codes are heavily optimized for them, and it is thus advantageous to rely on a sequence of eigenvalue decompositions for large scale algorithms.
We can dualize some constraints while keeping others; this leads to the following proposition:
² Recent results show however that it does have an effect when clusters are spherical Gaussians [11].
Proposition 1 The solution of the convex optimization problem defined in Section 2.3 can be obtained by maximizing $F(\beta) = \min_{M \succcurlyeq 0,\, \operatorname{tr} M = n,\, \Phi_{\lambda_0}(M) \geq k} \operatorname{tr} B(\beta) M - b(\beta)$ with respect to $\beta$, where

$$B(\beta) = A + \operatorname{Diag}(\beta_1) - \tfrac{1}{2}(\beta_2 - \beta_3) 1^\top - \tfrac{1}{2} 1 (\beta_2 - \beta_3)^\top - \beta_4 + \tfrac{1}{2\beta_6}\, \beta_5 \beta_5^\top,$$

$$b(\beta) = \beta_1^\top 1 - (n - (k-1)\lambda_0)\, \beta_2^\top 1 + \lambda_0\, \beta_3^\top 1 + k\beta_6/2 + \beta_5^\top 1,$$

and $\beta_1 \in \mathbb{R}^n$, $\beta_2 \in \mathbb{R}^n_+$, $\beta_3 \in \mathbb{R}^n_+$, $\beta_4 \in \mathbb{R}^{n \times n}_+$, $\beta_5 \in \mathbb{R}^n$, $\beta_6 \in \mathbb{R}_+$.
The variables $\beta_1, \beta_2, \beta_3, \beta_4, (\beta_5, \beta_6)$ correspond to the respective dualizations of the constraints $\operatorname{diag}(M) = 1_n$, $M 1_n \leq (n - (k-1)\lambda_0) 1_n$, $M 1_n \geq \lambda_0 1_n$, $M \geq 0$, and $M \succcurlyeq \frac{1}{k} 1_n 1_n^\top$.
The function $J(B) = \min_{M \succcurlyeq 0,\, \operatorname{tr} M = n,\, \Phi_{\lambda_0}(M) \geq k} \operatorname{tr} BM$ is a spectral convex function and may be computed in closed form through an eigenvalue decomposition. Moreover, a subgradient may be easily computed, readily leading to a numerically efficient subgradient method in fewer dimensions than $n^2$. Indeed, if we subsample the pointwise positivity constraint $M \geq 0$ (so that $\beta_4$ has only a size smaller than $n^{1/2} \times n^{1/2}$), then the set of dual variables $\beta$ we are trying to maximize has linear size in $n$ (instead of the primal variable $M$ being quadratic in $n$).
More refined optimization schemes, based on smoothing of the spectral function $J(B)$ by $\min_{M \succcurlyeq 0,\, \operatorname{tr} M = n,\, \Phi_{\lambda_0}(M) \geq k} [\operatorname{tr} BM + \varepsilon \operatorname{tr} M^2]$, are also used to speed up convergence (steepest descent of a smoothed function is generally faster than subgradient iterations) [13].
3.2 Computational complexity
The running time complexity can be split into initialization procedures and per iteration complexity. The per iteration complexity depends directly on the cost of our eigenvalue problems, which themselves are linear in the cost of a matrix-vector operation with the matrix A (we only require a fixed small number of eigenvalues). In all situations, we manage to keep a linear complexity in the number n of data points. Note, however, that the number of descent iterations cannot be bounded a priori; in simulations we limit the number of those iterations to 200.
For linear kernels with dimension d, the complexity of initialization is $O(d^2 n)$, while the complexity of each iteration is proportional to the cost of performing a matrix-vector operation with A, that is, $O(dn)$. For general kernels, the complexity of initialization is $O(n^3)$, while the complexity of each iteration is $O(n^2)$. However, using an incomplete Cholesky decomposition [5] makes all costs linear in n.
3.3 Rounding
After the convex optimization, we obtain a low-rank matrix $M \in \mathcal{C}_k$ which is pointwise nonnegative with unit diagonal, of the form $U U^\top$ where $U \in \mathbb{R}^{n \times m}$. We need to project it back to the discrete set $\mathcal{E}_k$. We have explored several possibilities, all with similar results. We propose the following procedure: we first project M back to the set of matrices of rank k and unit diagonal, by computing an eigendecomposition and rescaling the first k eigenvectors to unit norms, and then perform K-means, which is equivalent to performing the spectral clustering algorithm of [14] on the matrix M.
4 Semi-supervised learning
Working with equivalence matrices M allows us to easily include prior knowledge on the clusters [2, 15, 16], namely 'must-link' constraints (also referred to as positive constraints), for which we constrain an element of M to be one, and 'must-not-link' constraints (also referred to as negative constraints), for which we constrain an element of M to be zero. Those two constraints are linear in M and can thus easily be included in our convex formulation.
We assume throughout this section that we have a set of 'must-link' pairs $\mathcal{P}_+$ and a set of 'must-not-link' pairs $\mathcal{P}_-$. Moreover, we assume that the set of positive constraints is closed, i.e., that if there is a path of positive constraints between two data points, then these two data points already form a pair in $\mathcal{P}_+$. If the set of positive pairs does not satisfy this assumption, a larger set of pairs can be obtained by transitive closure.
Figure 2: Comparison with K-means in the semi-supervised setting, with data taken from Figure 1: clustering performance (averaged over 50 replications, with standard deviations in dotted lines) vs. the number of irrelevant dimensions, with 20% × n and 40% × n random matching pairs used for semi-supervision.
Positive constraints Given our closure assumption on $\mathcal{P}_+$, we get a partition of $\{1, \dots, n\}$ into $p$ 'chunks' of size greater than or equal to 1. The singletons in this partition correspond to data points that are not involved in any positive constraints, while the other subsets correspond to chunks of data points that must occur together in the final partition. We let $C_j$, $j = 1, \dots, p$, denote those groups, and let $P$ denote the $n \times p$ $\{0,1\}$-matrix defined such that each column (indexed by $j$) is equal to one for rows in $C_j$ and zero otherwise. Forcing those groups is equivalent to considering M of the form $M = P M_P P^\top$, where $M_P$ is an equivalence matrix of size $p$. Note that the positive constraint $M_{ij} = 1$ is in fact turned into the equality of columns (and thus rows, by symmetry) $i$ and $j$ of M, which is equivalent when $M \in \mathcal{E}_k$, but much stronger for $M \in \mathcal{C}_k$.
In our linear clustering framework, this is in fact equivalent to (a) replacing each chunk by its mean, (b) adding a weight equal to the number of elements in the group into the discriminative cost function and (c) modifying the regularization matrix to take into account the inner variance within each chunk. Positive constraints can be similarly included into K-means, to form a reduced weighted K-means problem, which is simpler than other approaches to dealing with positive constraints [17].
In Figure 2, we compare constrained K-means and the DIFFRAC framework under the same setting as in Figure 1, with different numbers of randomly selected positive constraints.
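Building the chunk indicator matrix P from the must-link pairs is a small union-find exercise (a sketch; the union-find helper is ours):

```python
import numpy as np

def chunk_matrix(n, must_link_pairs):
    """n x p {0,1} matrix P whose columns indicate the chunks induced by the
    transitive closure of the must-link pairs; singletons get their own column."""
    parent = list(range(n))
    def find(a):                       # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in must_link_pairs:
        parent[find(i)] = find(j)
    roots = sorted({find(i) for i in range(n)})
    col = {r: c for c, r in enumerate(roots)}
    P = np.zeros((n, len(roots)))
    for i in range(n):
        P[i, col[find(i)]] = 1.0
    return P    # optimize over M = P M_P P.T to enforce the constraints
```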
Negative constraints After the chunks corresponding to positive constraints have been collapsed to one point, we extend the set of negative constraints to those collapsed points (if the constraints were originally consistent, the negative constraints can be uniquely extended). In our optimization framework, we simply add a penalty function of the form $\frac{1}{|\mathcal{P}_-|} \sum_{(i,j) \in \mathcal{P}_-} M_{ij}^2$. The K-means rounding procedure also has to be constrained, e.g., using the procedure of [17].
5 Simulations
In this section, we apply the DIFFRAC framework to various clustering problems and situations. In all our simulations, we use the following distance between partitions $\mathcal{B} = B_1 \cup \dots \cup B_k$ and $\mathcal{B}' = B'_1 \cup \dots \cup B'_{k'}$ into $k$ and $k'$ disjoint subsets of $\{1, \dots, n\}$:

$$d(\mathcal{B}, \mathcal{B}') = \Big(k + k' - 2 \sum_{i,i'} \frac{\operatorname{Card}(B_i \cap B'_{i'})^2}{\operatorname{Card}(B_i)\, \operatorname{Card}(B'_{i'})}\Big)^{1/2}.$$

$d(\mathcal{B}, \mathcal{B}')$ defines a distance over the set of partitions [9] which is always between 0 and $(k + k' - 2)^{1/2}$. When comparing partitions, we use the squared distance $\frac{1}{2} d(\mathcal{B}, \mathcal{B}')^2$, which is always between 0 and $\frac{k + k'}{2} - 1$ (and between 0 and $k - 1$ if the two partitions have the same number of clusters).
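This metric is easy to compute from the confusion counts between the two labelings (a sketch; it assumes both labelings use contiguous labels 0, ..., k-1 with every label occurring):

```python
import numpy as np

def partition_distance_sq(labels_a, labels_b):
    """(1/2) d(B, B')^2 for two labelings of the same points (Section 5)."""
    ka, kb = labels_a.max() + 1, labels_b.max() + 1
    C = np.zeros((ka, kb))
    for a, b in zip(labels_a, labels_b):
        C[a, b] += 1.0
    denom = C.sum(axis=1, keepdims=True) * C.sum(axis=0, keepdims=True)
    term = np.divide(C ** 2, denom, out=np.zeros_like(C), where=denom > 0).sum()
    return (ka + kb - 2 * term) / 2
```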
5.1 Clustering classification datasets
| Dataset | K-means | DIFFRAC | RCA |
|---|---|---|---|
| Mnist-linear 0% | 5.6 ± 0.1 | 6.0 ± 0.4 | n/a |
| Mnist-linear 20% | 4.5 ± 0.3 | 3.6 ± 0.3 | 3.0 ± 0.2 |
| Mnist-linear 40% | 2.9 ± 0.3 | 2.2 ± 0.2 | 1.8 ± 0.4 |
| Mnist-RBF 0% | 5.6 ± 0.2 | 4.9 ± 0.2 | n/a |
| Mnist-RBF 20% | 4.6 ± 0.0 | 1.8 ± 0.4 | 4.1 ± 0.2 |
| Mnist-RBF 40% | 4.9 ± 0.0 | 0.9 ± 0.1 | 2.9 ± 0.1 |
| Isolet-linear 0% | 12.1 ± 0.6 | 12.3 ± 0.3 | n/a |
| Isolet-linear 20% | 10.5 ± 0.2 | 7.8 ± 0.8 | 9.5 ± 0.4 |
| Isolet-linear 40% | 9.2 ± 0.5 | 3.7 ± 0.2 | 7.0 ± 0.4 |
| Isolet-RBF 0% | 11.4 ± 0.4 | 11.0 ± 0.3 | n/a |
| Isolet-RBF 20% | 10.6 ± 0.0 | 7.5 ± 0.5 | 7.8 ± 0.5 |
| Isolet-RBF 40% | 10.0 ± 0.0 | 3.7 ± 1.0 | 6.9 ± 0.6 |

Table 1: Comparison of K-means, RCA and DIFFRAC, using the clustering metric defined in Section 5 (averaged over 10 replications), for linear and 'rbf' kernels and various levels of supervision.

We looked at the Isolet dataset (26 classes, 5,200 data points) from the UCI repository and the MNIST dataset of handwritten digits (10 classes, 5,000 data points). For each of those datasets, we compare the performances of K-means, RCA [18] and DIFFRAC, for linear and Gaussian kernels (referred to as 'rbf'), for a fixed value of the regularization parameter, and with different levels of supervision. Results are presented in Table 1: on unsupervised problems, K-means and DIFFRAC
have similar performance, while on semi-supervised problems, and in particular for nonlinear kernels, D IFFRAC outperforms both K-means and RCA. Note that all algorithms work on the same
data representation (linear or kernelized) and that differences are due to the underlying clustering
frameworks.
5.2 Semi-supervised classification
To demonstrate the effectiveness of our method in a semi-supervised learning (SSL) context, we performed experiments on some benchmark datasets for SSL described in [19]. We considered the following datasets: COIL, BCI and Text. We carried out the experiments in a transductive setting, i.e., the test set coincides with the set of unlabelled samples. This allowed us to conduct a fair comparison with the low density separation (LDS) algorithm of [19], which is an enhanced version of the so-called Transductive SVM. Deriving "out-of-sample" extensions for our method is nonetheless straightforward.

A primary goal in semi-supervised learning is to exploit a large number of unlabelled points in order to dramatically reduce the number of labelled points required to achieve a competitive classification accuracy. Accordingly, our experimental setting consists in observing how quickly the classification accuracy improves as the number of labelled points increases. The fewer labelled points a method needs to achieve decent classification accuracy, the more relevant it is for semi-supervised learning tasks. As shown in Figure 3, our method yields competitive classification accuracy with very few labelled points on all three datasets. Moreover, DIFFRAC reaches unexpectedly good results on the Text dataset, where most semi-supervised learning methods usually show disappointing performance. One explanation might be that DIFFRAC acts as an "augmented" clustering algorithm, whereas most semi-supervised learning algorithms are built as "augmented" versions of traditional supervised learning algorithms, such as LDS, which is built on SVMs. Hence, for datasets exhibiting multi-class structure such as Text, DIFFRAC is better able to utilize unlabelled points, since it is based on a multi-class clustering algorithm rather than on binary SVMs, for which multi-class extensions remain unclear. Our experiments thus support the view that semi-supervised learning algorithms built on clustering algorithms, augmented with labelled data acting as hints on clusters, are worthy of further investigation.
6 Conclusion
We have presented a discriminative framework for clustering based on the square loss and penalization through spectral functions of equivalence matrices. Our formulation enables the easy incorporation of semi-supervised constraints, which leads to state-of-the-art performance in semi-supervised learning. Moreover, our discriminative framework should allow the use of existing methods for learning the kernel matrix from data [20]. Finally, we are currently investigating the use of DIFFRAC in semi-supervised image segmentation. In particular, early experiments on estimating the number of clusters using variation rates of our discriminative costs are very promising.
Figure 3: Semi-supervised classification. Learning curves (test error vs. number of labelled training points) for DIFFRAC and LDS on the Coil100, BCI and Text datasets.
References
[1] L. Xu, J. Neufeld, B. Larson, and D. Schuurmans. Maximum margin clustering. In Adv. NIPS, 2004.
[2] T. De Bie and N. Cristianini. Fast SDP relaxations of graph cut clustering, transduction, and other combinatorial problems. J. Mach. Learn. Res., 7:1409–1436, 2006.
[3] K. Zhang, I. W. Tsang, and J. T. Kwok. Maximum margin clustering made practical. In Proc. ICML, 2007.
[4] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer-Verlag, 2001.
[5] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge Univ. Press, 2004.
[6] A. Frieze and M. Jerrum. Improved approximation algorithms for MAX k-CUT and MAX BISECTION. In Integer Programming and Combinatorial Optimization, volume 920, pages 1–13. Springer, 1995.
[7] C. Swamy. Correlation clustering: maximizing agreements via semidefinite programming. In ACM-SIAM Symp. Discrete Algorithms, 2004.
[8] A. S. Lewis and H. S. Sendov. Twice differentiable spectral functions. SIAM J. Mat. Anal. App., 23(2):368–386, 2002.
[9] F. R. Bach and M. I. Jordan. Learning spectral clustering, with application to speech separation. J. Mach. Learn. Res., 7:1963–2001, 2006.
[10] B. Schölkopf, A. J. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comp., 10(3):1299–1319, 1998.
[11] N. Srebro, G. Shakhnarovich, and S. Roweis. An investigation of computational and informational limits in Gaussian mixture clustering. In Proc. ICML, 2006.
[12] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ. Press, 2003.
[13] J. F. Bonnans, J. C. Gilbert, C. Lemaréchal, and C. A. Sagastizábal. Numerical Optimization: Theoretical and Practical Aspects. Springer, 2003.
[14] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: analysis and an algorithm. In Adv. NIPS, 2002.
[15] L. Xu and D. Schuurmans. Unsupervised and semi-supervised multi-class support vector machines. In Proc. AAAI, 2005.
[16] M. Heiler, J. Keuchel, and C. Schnörr. Semidefinite clustering for image segmentation with a-priori knowledge. In Pattern Recognition, Proc. DAGM, 2005.
[17] K. Wagstaff, C. Cardie, S. Rogers, and S. Schrödl. Constrained K-means clustering with background knowledge. In Proc. ICML, 2001.
[18] A. Bar-Hillel, T. Hertz, N. Shental, and D. Weinshall. Learning distance functions using equivalence relations. In Proc. ICML, 2003.
[19] O. Chapelle and A. Zien. Semi-supervised classification by low density separation. In Proc. AISTATS, 2004.
[20] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In Proc. ICML, 2004.
larson:1 coincides:1 criterion:3 trying:1 demonstrate:1 image:2 novel:3 superior:1 camb:2 empirically:1 volume:1 belong:1 extend:1 numerically:1 significant:2 ai:1 rd:2 erieure:1 similarly:1 shawe:1 chapelle:1 similarity:1 supervision:2 add:2 isometry:1 recent:3 irrelevant:2 forcing:1 disappointing:1 verlag:1 binary:1 success:1 seen:2 minimum:7 additional:5 relaxed:4 impose:1 greater:1 maximize:1 semi:17 zien:1 multiple:1 harchaoui:2 faster:1 unlabelled:2 calculation:1 bach:4 a1:1 impact:1 regression:2 vision:1 metric:3 iteration:7 kernel:17 represent:1 whereas:1 background:1 singular:4 sch:1 envelope:1 cedex:1 n7:1 effectiveness:1 jordan:3 integer:1 easy:1 decent:1 hastie:1 competing:1 reduce:2 regarding:1 inner:1 pca:2 penalty:1 speech:2 dramatically:1 generally:2 clear:1 aimed:1 tune:1 eigenvectors:3 coil100:1 svms:2 differentiability:1 reduced:1 dotted:2 per:4 yy:5 tibshirani:1 odl:1 discrete:3 mat:1 shental:1 group:3 prevent:1 pj:1 traa:1 utilize:1 graph:1 relaxation:9 subgradient:3 sum:4 throughout:1 separation:3 scaling:3 followed:1 quadratic:1 nonnegative:2 occur:1 constraint:37 incorporation:1 constrain:2 n3:1 aspect:1 speed:1 min:4 performing:2 separable:2 hertz:1 smaller:2 enst:1 separability:1 invariant:2 rca:4 wagstaff:1 taken:1 unregularized:1 remains:1 turn:1 gaussians:1 operation:2 apply:2 disjoints:1 kwok:1 spectral:9 alternative:1 robustness:1 swamy:1 clustering:41 running:1 include:1 hinge:1 xw:2 k1:3 uj:2 implied:1 already:2 looked:2 primary:1 usual:4 diagonal:3 traditional:1 unclear:1 exhibit:1 distance:4 link:4 card:3 outer:4 kak2f:1 code:1 pointwise:3 minimizing:3 equivalently:1 negative:4 anal:1 perform:1 upper:1 datasets:10 benchmark:1 descent:2 trb:1 situation:2 extended:2 incorporated:2 looking:2 schr:1 rn:9 smoothed:1 bk:2 pair:7 paris:2 namely:4 required:1 optimized:1 schn:1 smo:1 nip:2 able:1 bar:1 usually:1 pattern:2 program:1 built:3 including:1 max:2 explanation:1 rely:1 regularized:3 indicator:6 scheme:1 numerous:1 conic:1 carried:1 transitive:1 text:4 prior:3 loss:5 bear:1 interesting:2 proportional:1 srebro:1 penalization:1 eigendecomposition:1 affine:1 consistent:1 translation:1 row:5 normalizes:2 echal:1 keeping:1 bias:1 allow:1 sendov:1 dualization:1 dimension:10 xn:1 gram:2 curve:3 made:1 approximate:1 keep:2 global:1 investigating:1 b1:2 discriminative:13 table:2 promising:2 learn:2 nature:1 robust:2 symmetry:1 schuurmans:2 rue:2 diag:6 aistats:1 main:2 linearly:3 noise:5 subsample:1 n2:3 allowed:1 fair:1 x1:1 augmented:3 xu:2 telecom:1 referred:4 transduction:1 slow:1 nonnegativity:1 explicit:1 rk:2 removing:1 load:1 specific:1 showing:1 explored:1 svm:2 essential:2 exists:1 mnist:7 adding:1 kx:1 margin:2 sophistication:1 led:1 simply:3 forming:1 expressed:2 springer:3 mij:1 corresponds:3 relies:1 acm:1 lewis:1 coil:1 goal:2 sorted:1 rbf:8 labelled:9 lemar:1 paristech:1 hard:1 included:2 acting:1 lemma:1 principal:1 called:1 duality:1 experimental:1 support:3 cholesky:1 |
2,502 | 327 | Neural Network Implementation of Admission Control
Rodolfo A. Milito, Isabelle Guyon, and Sara A. SoDa
AT&T Bell Laboratories, Crawfords Corner Rd., Holmdel, NJ 07733
Abstract
A feedforward layered network implements a mapping required to control an
unknown stochastic nonlinear dynamical system. Training is based on a
novel approach that combines stochastic approximation ideas with backpropagation. The method is applied to control admission into a queueing system operating in a time-varying environment.
1 INTRODUCTION
A controller for a discrete-time dynamical system must provide, at time t_n, a value u_n for the control variable. Information about the state of the system when such a decision is made is available through the observable y_n. The value u_n is determined on the basis of the current observation y_n and the preceding control action u_{n−1}. Given the information I_n = (y_n, u_{n−1}), the controller implements a mapping I_n → u_n.

Open-loop controllers suffice in static situations which require a single-valued control policy u*: a constant mapping I_n → u*, regardless of I_n. Closed-loop controllers provide a dynamic control action u_n, determined by the available information I_n. This work addresses the question of training a neural network to implement a general mapping I_n → u_n.

The problem that arises is the lack of training patterns: the appropriate value u_n for a given input I_n is not known. The quality of a given control policy can only be assessed by using it to control the system and monitoring system performance. The sensitivity of the performance to variations in the control policy cannot be investigated analytically, since the system is unknown. We show that such sensitivity can be estimated within the standard framework of stochastic approximation. The usual back-propagation algorithm is used to determine the sensitivity of the output u_n to variations in the parameters W of the network, which can thus be adjusted so as to improve system performance.

The advantage of a neural network as a closed-loop controller resides in its ability to accept inputs (I_n, I_{n−1}, ..., I_{n−p}). The additional p time steps into the past provide information about the history of the controlled system. As demonstrated here, neural network controllers can capture regularities in the structure of time-varying environments, and are particularly powerful for tracking time variations driven by stationary stochastic processes.
2 CONTROL OF STOCHASTIC DYNAMICAL SYSTEMS
Consider a dynamical system for which the state x_n is updated at discrete times t_n = nδ. The control input u_n in effect at time t_n affects the dynamical evolution, and

    x_{n+1} = f(x_n, u_n, ξ_n).    (2.1)

Here {ξ_n} is a stochastic process which models the intrinsic randomness of the system as well as external, unmeasurable disturbances. The variable x_n is not accessible to direct measurement, and knowledge about the state of the system is limited to the observable

    y_n = h(x_n).    (2.2)

Our goal is to design a neural network controller which produces a specific value u_n for the control variable to be applied at time t_n, given the available information I_n ≡ (y_n, u_{n−1}).

In order to design a controller which implements the appropriate control policy I_n → u_n, a specification of the purpose of controlling the dynamical system is needed. There is typically a function of the observable,

    J_n = H(y_n),    (2.3)

which measures system performance. It follows from Eqs. (2.1)–(2.3) that the composition G = H ∘ h ∘ f determines

    J_{n+1} = G(x_n, u_n, ξ_n),    (2.4)

a function of the state x of the system, the control variable u, and the stochastic variable ξ. The quantity of interest is the expectation value of the system performance,

    ⟨J_n⟩ = ⟨H(y_n)⟩_ξ,    (2.5)

averaged with respect to ξ. This expectation value can be estimated by the long-run average

    J̄_N = (1/N) Σ_{n=1}^N J_n,    (2.6)

since for an ergodic system J̄_N → ⟨J_n⟩ as N → ∞. The goal of the controller is to generate a sequence {u_n}, 1 ≤ n ≤ N, of control values such that the average performance ⟨J_n⟩ stabilizes to a desired value J*.

The parameters W of the neural network are thus to be adapted so as to minimize a cost function

    E(W) = (1/2) (⟨J_n⟩ − J*)².    (2.7)

The dependence of E(W) on W is implicit: the value of ⟨J_n⟩ depends on the controlling sequence {u_n}, which depends on the parameters W of the neural network.

On-line training proceeds through a gradient descent update

    W_{n+1} = W_n − η ∇_W E_n(W),    (2.8)

towards the minimization of the instantaneous deviation

    E_n(W) = (1/2) (J_{n+1} − J*)².    (2.9)

There is no specified target for the output u_n that the controller is expected to provide in response to the input I_n = (y_n, u_{n−1}). The output u_n can thus be considered as a variable u which controls the subsequent performance: J_{n+1} = G(x_n, u, ξ_n), as follows from Eq. (2.4). Then

    ∇_W E_n(W) = (J_{n+1} − J*) (dG/du) ∇_W u.    (2.10)

The factor ∇_W u measures the sensitivity of the output of the neural network controller to changes in the internal parameters W: at fixed input I_n, the output u_n is a function only of the network parameters W. The gradient of this scalar function is easily computed using the standard back-propagation algorithm (Rumelhart et al., 1986).

The factor dG/du measures the sensitivity of the system performance J_{n+1} to changes in the control variable. The information about the system needed to evaluate this derivative is not available: unknown are the function f, which describes how x_{n+1} is affected by u_n at fixed x_n, and the function h, which describes how this dependence propagates to the observable y_{n+1}. The algorithm is rendered operational through the use of stochastic approximation (Kushner, 1971): assuming that the average system performance ⟨J_n⟩ is a monotonically increasing function of u, the sign of the partial derivative d⟨J_n⟩/du is positive. Stochastic approximation amounts to neglecting the unknown fluctuations of this derivative with u, and approximating it by a constant positive value, which is then absorbed in a redefinition of the step size η > 0.

The on-line update rule then becomes

    W_{n+1} = W_n − η (J_{n+1} − J*) ∇_W u_n.    (2.11)

As with stochastic approximation, the on-line gradient update uses the instantaneous gradient based on the current measurement J_{n+1}, rather than the gradient of the expected value ⟨J_n⟩, whose deviations with respect to the target J* are to be minimized. The combined use of back-propagation and stochastic approximation to evaluate ∇_W E_n(W), leading to the update rule of Eq. (2.11), provides a general and powerful learning rule for neural network controllers. The only requirement is that the average performance ⟨J_n⟩ be indeed a monotonic function of the control variable u.

In the following section we illustrate the application of the algorithm to an admission controller for a traffic queueing problem. The advantage of the neural network over a standard stochastic approximation approach becomes apparent when the mapping which produces u_n is used to track a time-varying environment generated by a stationary stochastic process. A straightforward extension of the approach discussed above is used to train a network to implement a mapping (I_n, I_{n−1}, ..., I_{n−p}) → u_n. The additional p time steps into the past provide information on the history of the controlled system, and allow the network to capture regularities in the time variations of the environment.
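To make the update rule (2.11) concrete, here is a minimal numerical sketch (our own illustration, not code from the paper) of one training step for a one-hidden-layer controller. The plant and its performance measure are treated as a black box: one applies u_n, observes J_{n+1}, and updates. All names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer controller: input vector I_n -> scalar control u_n.
W1, b1 = 0.1 * rng.standard_normal((8, 3)), np.zeros(8)
W2, b2 = 0.1 * rng.standard_normal(8), 0.0

def controller(I):
    h = np.tanh(W1 @ I + b1)
    return W2 @ h + b2, h

def grad_u(I, h):
    """Back-propagation of the scalar output u w.r.t. all parameters."""
    dW2, db2 = h, 1.0
    dh = W2 * (1.0 - h ** 2)              # derivative through tanh
    return np.outer(dh, I), dh, dW2, db2

def train_step(I, J_next, J_star, eta=0.01):
    """One application of Eq. (2.11): W <- W - eta (J_{n+1} - J*) grad_W u."""
    global W1, b1, W2, b2
    u, h = controller(I)
    dW1, db1, dW2, db2 = grad_u(I, h)
    err = J_next - J_star                 # instantaneous performance deviation
    W1 -= eta * err * dW1
    b1 -= eta * err * db1
    W2 -= eta * err * dW2
    b2 -= eta * err * db2
    return u
```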
3 A TWO-TRAFFIC QUEUEING PROBLEM
Consider an admission controller for a queueing system. As depicted in Fig. 1, the system
includes a server, a queue, a call admission mechanism, and a controller.

Figure 1: Admission controller for a two-traffic queueing problem. Local arrivals enter the queue directly; remote arrivals pass through the admission mechanism, which may reject them; queued calls may abandon; admitted calls are served.
The need to serve two independent traffic streams with a single server arises often in telecommunication networks. In a typical situation, in addition to remote arrivals which can be monitored at the control node, there are local arrivals whose admission to the queue can be neither monitored nor regulated. Within this limited information scenario, the controller must execute a policy that meets specified performance objectives. Such is the situation we now model.

Two streams are offered to the queueing system: remote traffic and local traffic. Both streams are Poisson, i.e., the interarrival times are independently and exponentially distributed, with mean 1/λ. Calls originated by the remote stream can be controlled, by denying admission to the queue. Local calls are neither controlled nor monitored. While the arrival rate λ_R of remote calls is fixed, the rate λ_L(t) of local calls is time-varying. It depends on the state of a stationary Markov chain to be described later (Kleinrock, 1975). The service time required by a call of any type is an exponentially distributed random variable, with mean 1/μ.

Calls that find an empty queue on arrival get immediately into service. Otherwise, they wait in queue. The service discipline is first in first out, non-idling. Every arrival is assigned a "patience threshold" τ, independently drawn from a fixed but unknown distribution that characterizes customer behavior. If the waiting time in queue exceeds its "patience threshold", the call abandons.

Ideally, every incoming call should be admitted. The server, however, cannot process, on the average, more than μ calls per unit time. Whenever the offered load ρ = [λ_R + λ_L(t)]/μ approaches or exceeds 1, the queue starts to build up. Long queues result in long delays, which in turn induce heavy abandonments. To keep the abandonments within tolerable limits, it becomes necessary to reject some remote arrivals.

The call admission mechanism is implemented via a token-bank (not shown in the figure) rate control throttle (Berger, 1991). Tokens arrive at the token-bank at a deterministic rate λ_T. The token-bank is finite, and tokens that find a full bank are lost. A token is needed by a remote call to be admitted to the queue, and tokens are not reusable. Calls that find an empty token bank are rejected. Remote admissions are thus controlled through u = λ_T/λ_R.

Local calls are always admitted. The local arrival rate λ_L(t) is controlled by an underlying q-state Markov chain, a birth-death process (Kleinrock, 1975) with transition rate γ only between neighboring states. When the Markov chain is in state i, 1 ≤ i ≤ q, the local arrival rate is λ_L(i).

Complete specification of the state x_n of the system at time t_n would require information about the number of arrivals, abandonments, and services for both remote and local traffic during the preceding time interval of duration δ = 1, as well as rejections for the controllable remote traffic, and the waiting time of every queued call. But the local traffic is not monitored, and information on arrivals and waiting times is not accessible. Thus y_n only contains information about the remote traffic: the number n_r of rejected calls, the number n_a of abandonments, and the number n_s of serviced calls since t_{n−1}. The information I_n available at time t_n also includes the preceding control action u_{n−1}. The controller uses (I_n, I_{n−1}, ..., I_{n−p}) to determine u_n.
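The token-bank throttle itself is easy to simulate. The sketch below (our illustration, not from the paper) implements the admission decision in discrete-event form; parameter names mirror the symbols above.

```python
import random

def simulate_throttle(lam_R, lam_T, bank_size, horizon, seed=0):
    """Count admissions/rejections of remote Poisson arrivals under a
    token-bank rate-control throttle with deterministic token arrivals."""
    rng = random.Random(seed)
    tokens, admitted, rejected = 0, 0, 0
    t, next_call = 0.0, rng.expovariate(lam_R)
    next_token = 1.0 / lam_T
    while t < horizon:
        if next_token < next_call:               # token arrives first
            t = next_token
            tokens = min(tokens + 1, bank_size)  # full bank: token is lost
            next_token = t + 1.0 / lam_T
        else:                                    # remote call arrives first
            t = next_call
            if tokens > 0:
                tokens -= 1
                admitted += 1
            else:
                rejected += 1
            next_call = t + rng.expovariate(lam_R)
    return admitted, rejected

print(simulate_throttle(lam_R=100, lam_T=80, bank_size=10, horizon=100.0))
```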
The goal of the control policy is to admit as many calls as possible, compatible with a tolerable rate of abandonment n_a/λ_R ≤ Δ. The ratio n_a/λ_R thus plays the role of the performance measure J_n, and its target value is J* = Δ. Values in excess of Δ imply an excessive number of abandonments and require stricter admission control. Values smaller than Δ are penalized if obtained at the expense of avoidable rejections.
4 RESULTS
All simulations reported here correspond to a server capable of handling calls at a rate of μ = 200 per unit time. The remote traffic arrival rate is λ_R = 100. The local traffic arrival rate is controlled by a q = 10 Markov chain with λ_L(i) = 20i for 1 ≤ i ≤ 10. The offered load thus spans the range 0.6 ≤ ρ ≤ 1.5, in steps of 0.1. Transition rates γ = 0.1, 1, and 10 in the Markov chain have been used to simulate slow, moderate, and rapid variations in the offered load.

The neural network controller receives inputs (I_n, I_{n−1}, ..., I_{n−4}) at time t_n through 20 input units. A hidden layer with 6 units transmits information to the single output unit, which provides u_n. The bound for the tolerable abandonment rate is set at Δ = 0.1.
To check whether the neural network controller is capable of correct generalization, a network trained under a time-varying scenario was subjected to a static one for testing. Training takes place under an offered load ρ varying at a rate of γ = 1. The network is tested at γ = 0: the underlying Markov chain is frozen and ρ is kept fixed for a long enough period to stabilize the control variable around a fixed value u* and obtain statistically meaningful values for n_a, n_r, and n_s. A careful numerical investigation of these quantities as a function of ρ reveals that the neural network has developed an adequate control policy: light loads ρ ≤ 0.8 spontaneously result in low values of n_a and require no control (u = 1.25 guarantees an ample token supply, and n_r ≈ 0), but as ρ exceeds 1, the system is controlled by decreasing the value of u below 1, thus increasing n_r to satisfy the requirement n_a/λ_R ≤ Δ. Detailed results on the static performance in comparison with a standard stochastic approximation approach will be reported elsewhere.

It is in the tracking of a time-varying environment that the power of the neural network controller is revealed. A network trained under a varying offered load is tested dynamically by monitoring the distribution of abandonments and rejections as the network controls an environment varying at the same rate γ as used during training. The abandonment distribution F_a(x) = Prob{n_a/λ_R ≤ x}, shown in Fig. 2(a) for γ = 1, indicates that the neural network (NN) controller outperforms both stochastic approximation¹ (SA) and the uncontrolled system (UN): the probability of keeping the abandonment rate n_a/λ_R bounded is larger for the NN controller for all values of the bound x. As for the goal of not exceeding x = Δ, it is achieved with probability F_a(Δ) = 0.88 by the NN, in comparison to only F_a(Δ) = 0.74 with SA or F_a(Δ) = 0.51 if uncontrolled. The rejection distribution F_r(x) = Prob{n_r/λ_R ≤ x}, shown in Fig. 2(b) for γ = 1, illustrates the stricter control policy provided by the NN. Results for γ = 0.1 and γ = 10, not shown here, confirm the superiority of the control policy developed by the neural network.

¹ Stochastic approximation with a fixed gain, to enable the controller to track time-varying environments. The gain was optimized numerically.
Figure 2: (a) Abandonment distribution F_a(x) and (b) rejection distribution F_r(x), comparing the neural network controller, stochastic approximation, and the uncontrolled system.
5 CONCLUSIONS
The control of an unknown stochastic system requires a mapping that is implemented
here via a feedforward layered neural network. A novel learning rule, a blend of
stochastic approximation and back-propagation, is proposed to overcome the lack of
training patterns through the use of on-line performance information provided by the
system under control. Satisfactorily tested for an admission control problem, the
approach shows promise for a variety of applications to congestion control in
telecommunication networks.
References
A.W. Berger, "Overload control using a rate control throttle: selecting token capacity for robustness to arrival rates", IEEE Transactions on Automatic Control 36, 216–219 (1991).
H. Kushner, Stochastic Approximation Methods for Constrained and Unconstrained Systems, Springer-Verlag (1971).
L. Kleinrock, Queueing Systems, Volume I: Theory, John Wiley & Sons (1975).
D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning representations by back-propagating errors", Nature 323, 533–536 (1986).
2,503 | 3,270 | McRank: Learning to Rank Using Multiple
Classification and Gradient Boosting
Ping Li*
Dept. of Statistical Science
Cornell University
pingli@cornell.edu
Christopher J.C. Burges
Microsoft Research
Microsoft Corporation
cburges@microsoft.com
Qiang Wu
Microsoft Research
Microsoft Corporation
qiangwu@microsoft.com
Abstract
We cast the ranking problem as (1) multiple classification (?Mc?) (2) multiple ordinal classification, which lead to computationally tractable learning algorithms
for relevance ranking in Web search. We consider the DCG criterion (discounted
cumulative gain), a standard quality measure in information retrieval. Our approach is motivated by the fact that perfect classifications result in perfect DCG
scores and the DCG errors are bounded by classification errors. We propose using the Expected Relevance to convert class probabilities into ranking scores. The
class probabilities are learned using a gradient boosting tree algorithm. Evaluations on large-scale datasets show that our approach can improve LambdaRank [5]
and the regressions-based ranker [6], in terms of the (normalized) DCG scores. An
efficient implementation of the boosting tree algorithm is also presented.
1 Introduction
The general ranking problem has widespread applications including commercial search engines and
recommender systems. We develop McRank, a computationally tractable learning algorithm for the
general ranking problem; and we present our approach in the context of ranking in Web search.
For a given user input query, a commercial search engine returns many pages of URLs, in an order
determined by the underlying proprietary ranking algorithm. The quality of the returned results are
largely evaluated on the URLs displayed in the very first page. The type of ranking problem in this
study is sometimes referred to as dynamic ranking (or simply, just ranking), because the URLs are
dynamically ranked (in real-time) according to the specific user input query. This is different from
the query-independent static ranking based on, for example, ?page rank? [3] or ?authorities and
hubs? [12], which may, at least conceptually, serve as an important ?feature? for dynamic ranking
or to guide the generation of a list of URLs fed to the dynamic ranker.
There are two main categories of ranking algorithms. A popular scheme is based on learning
pairwise preferences, including RankNet [4], LambdaRank [5], RankSVM [11], RankBoost [7],
GBRank [14], and FRank [13]. Both LambdaRank and RankNet used neural nets.1 RankNet used
a cross-entropy type of loss function and LambdaRank used a gradient based on NDCG smoothed
by the RankNet loss. Another scheme is based on regression [6]. [6] considered the DCG measure
(discounted cumulative gain) [10] and showed that the DCG errors are bounded by regression errors.
In this study, we also consider the DCG measure. From the definition of DCG, it appears more direct
to cast the ranking problem as multiple classification (?Mc?) as opposed to regression. In order to
convert classification results into ranking scores, we propose a simple and stable mechanism by
using the Expected Relevance. Our evaluations on large-scale datasets demonstrate the superiority
of the classification-based ranker (McRank) over both the regression-based and pair-based schemes.
2 Discounted Cumulative Gain (DCG)
For an input query, the ranker returns n ordered URLs. Suppose the URLs fed to the ranker are originally ordered {1, 2, 3, ..., n}. The ranker will output a permutation mapping π : {1, 2, 3, ..., n} → {1, 2, 3, ..., n}, where π(i) is the rank position assigned to the ith URL; we write π_i = π(i) and denote the inverse mapping by π^{−1}.

The DCG score is computed from the relevance levels of the n URLs as

    DCG = Σ_{i=1}^n c_[i] (2^{y_{π^{−1}(i)}} − 1) = Σ_{i=1}^n c_[π_i] (2^{y_i} − 1),    (1)

* Much of the work was conducted while Ping Li was an intern at Microsoft in 2006.
¹ In fact LambdaRank supports any preference function, although the reported results in [5] are for pairwise.
where [i] denotes the rank position, and y_i ∈ {0, 1, 2, 3, 4} is the relevance level of the ith URL in the original (pre-ranked) order. y_i = 4 corresponds to a "perfect" relevance and y_i = 0 corresponds to a "poor" relevance. For generating training datasets, human judges have manually labeled a large number of queries and URLs. In this study, we assume these labels are "gold-standard."

In the definition of DCG, c_[i], which is a non-increasing function of i, is typically set as

    c_[i] = 1 / log(1 + i)  if i ≤ L,    and    c_[i] = 0  if i > L,    (2)

where L is the "truncation level" and is typically set to L = 10, to reflect the fact that the search quality of commercial search engines is mainly determined by the URLs displayed in the first page.
Suppose a dataset contains N_Q queries. It is common practice to normalize the DCG score for each query and report the normalized DCG ("NDCG") score averaged over all queries. In other words, the NDCG for the jth query (NDCG_j) and the final NDCG of the dataset (NDCG_F) are

    NDCG_j = DCG_j / DCG_{j,g},        NDCG_F = (1/N_Q) Σ_{j=1}^{N_Q} NDCG_j,    (3)

where DCG_{j,g} is the maximum possible (or "gold standard") DCG score of the jth query.
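As a quick illustration (our own sketch, not part of the paper), Eqs. (1)–(3) translate directly into a few lines of Python. Here `relevances` lists the labels y_i of the URLs in the order the ranker returned them; the log base is immaterial for NDCG since it cancels in the normalization.

```python
import math

def dcg(relevances, L=10):
    """DCG of Eqs. (1)-(2): relevances[i] is the label of the URL at rank i+1."""
    return sum((2 ** y - 1) / math.log(1 + i)
               for i, y in enumerate(relevances[:L], start=1))

def ndcg(relevances, L=10):
    """NDCG of Eq. (3): normalize by the DCG of the ideal (sorted) ordering."""
    ideal = dcg(sorted(relevances, reverse=True), L)
    return dcg(relevances, L) / ideal if ideal > 0 else 0.0

print(ndcg([4, 2, 3, 0, 1, 0, 0, 2, 0, 0]))
```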
3 Learning to Rank Using Classification
The definition of DCG suggests that we can cast the ranking problem naturally as multiple classification (i.e., K = 5 classes), because obviously perfect classifications will lead to perfect DCG
scores. While the DCG criterion is non-convex and non-smooth, classification is very well-studied.
We should mention that one does not really need perfect classifications in order to produce perfect
DCG scores. For example, suppose within a query, the URLs are all labeled level 1 or higher. If
an algorithm always classifies the URLs one level lower (i.e., URLs labeled level 4 are classified as
level 3, and so on), we still have the perfect DCG score but the classification ?error? is 100%. This
phenomenon to an extent, may provide some ?safety cushion? for casting ranking as classification.
[6] cast ranking as regression and showed that the DCG errors are bounded by regression errors. It
appears to us that the regression-based approach is less direct and possibly also less accurate than our
classification-based proposal. For example, it is well-known that, although one can use regression
for classification, it is often better to use logistic regression especially for multiple classification [8].
3.1 Bounding DCG Errors by Classification Errors
Following [6, Theorem 2], we show that the DCG errors can be bounded by classification errors.
For a permutation mapping ?, the error is DCGg - DCG? . One simple way to obtain the perfect
DCGg is to rank the URLs directly according to the gold-standard relevance levels. That is, all
URLs with relevance level k + 1 are ranked higher than those with relevance level ? k; and the
URLs with the same relevance levels are arbitrarily ranked without affecting DCGg . We denote the
corresponding permutation mapping also by g.
Lemma 1 Given n URLs, originally ordered as {1, 2, 3, ..., n}. Suppose a classifier assigns a relevance level y?i ? {0, 1, 2, 3, 4} to the ith URL, for all n URLs. A permutation mapping ? ranks the
URLs according to y?i , i.e., ?(i) < ?(j) if y?i > y?j , and, URL i and URL j are arbitrarily ranked if
y?i = y?j . The corresponding DCG error is bounded by the square root of the classification error,
Proof:
n
n
Y
? X
2/n
c2[i] ? n
c[i]
DCGg ? DCG? ?15 2
DCG? =
n
X
i=1
?
=
n
X
i=1
n
X
i=1
i=1
n
X
c[?i ] (2yi ? 1) =
c[gi ] 2y?i
=DCGg +
i=1
n
X
i=1
1yi 6=y?i
!1/2
.
n
X
c[?i ] 2y?i ? 1 +
c[?i ] 2yi ? 2y?i
i=1
i=1
n
X
?1 +
c[?i ] 2yi ? 2y?i
i=1
c[gi ] (2yi ? 1) ?
n
X
i=1
!1/2
n
X
i=1
n
X
c[gi ] 2yi ? 2y?i +
c[?i ] 2yi ? 2y?i
c[?i ] ? c[gi ]
i=1
2yi ? 2y?i .
(4)
Note that
Pn
i=1 c[?i ]
DCGg ? DCG? ?
?
n
X
i=1
Note that
P
y
?i
2y?i ? 1 ? n
? 1 . Therefore,
i=1 c[gi ] 2
n
X
i=1
c[gi ] ? c[?i ]
2
Pn
=
2
i=1 c[?i ]
c[gi ] ? c[?i ]
!1/2
n
X
Pn
c2[gi ] =
i=1
yi
2
i=1
2yi ? 2y?i
y
?i
?2
Pn
2
2
i=1 c[i] ,
!1/2
?
Qn
2
i=1 c[?i ]
2
n
X
c2[i]
i=1
=
Qn
? 2n
2
i=1 c[gi ]
n
Y
2/n
c[i]
i=1
=
Qn
!1/2
2
i=1 c[i] ,
15
n
X
i=1
1yi 6=y?i
!1/2
and 24 ? 20 = 15.
Thus, we can minimize the classification error Σ_{i=1}^n 1_{y_i ≠ ŷ_i} as a surrogate for minimizing the DCG error. Of course, since the classification error itself is non-convex and non-smooth, we actually should use other (well-known) surrogate loss functions such as (7).
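The bound is easy to sanity-check numerically. The following sketch (ours, for illustration only) draws random true and predicted labels, ranks by the predictions, and verifies that the DCG gap never exceeds the right-hand side of Lemma 1.

```python
import math, random

def check_lemma(n=20, trials=1000, L=10):
    c = [1.0 / math.log(1 + i) if i <= L else 0.0 for i in range(1, n + 1)]
    const = 15 * math.sqrt(2) * math.sqrt(
        sum(ci ** 2 for ci in c) - n * math.prod(ci ** (2 / n) for ci in c))
    rng = random.Random(0)
    for _ in range(trials):
        y = [rng.randint(0, 4) for _ in range(n)]
        yhat = [rng.randint(0, 4) for _ in range(n)]
        # DCG of the gold ordering and of the ordering induced by yhat.
        dcg_g = sum(ci * (2 ** yi - 1) for ci, yi in zip(c, sorted(y, reverse=True)))
        order = sorted(range(n), key=lambda i: -yhat[i])
        dcg_p = sum(c[r] * (2 ** y[i] - 1) for r, i in enumerate(order))
        err = math.sqrt(sum(yi != yh for yi, yh in zip(y, yhat)))
        assert dcg_g - dcg_p <= const * err + 1e-9
    print("bound verified on", trials, "random instances")

check_lemma()
```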
3.2 Input Data for Classification
A training dataset contains N_Q queries. The jth query corresponds to n_j URLs; each URL is manually labeled with one of the K = 5 relevance levels. Engineers have developed methodologies to construct "features" by combining the query and URLs, but the details are usually "trade secret."

One important aspect in designing features, at least for the convenience of using traditional machine learning algorithms, is that these features should be comparable across queries. For example, one (artificial) feature could be the number of times the query appears in the Web page, which is comparable across queries. Both pair-based rankers and regression-based rankers implicitly made this assumption, as they tried to learn a single rank function for all queries using the same set of features.

Thus, after we have generated feature vectors by combining the queries and URLs, we can create a "training data matrix" of size N × P, where N = Σ_{j=1}^{N_Q} n_j is the total number of "data points" (i.e., Query+URL) and P is the total number of features. This way, we can use the traditional machine learning notation {y_i, x_i}_{i=1}^N to denote the training dataset. Here x_i ∈ R^P is the ith feature vector in P dimensions, and y_i ∈ {0, 1, 2, 3, 4 = K − 1} is the class (relevance) label of the ith data point.
3.3 From Classification to Ranking
Although perfect classifications lead to perfect DCG scores, in reality we need a mechanism to convert (imperfect) classification results into ranking scores.

One possibility is already mentioned in Lemma 1. That is, we classify each data point into one of the K = 5 classes and rank the data points according to the class labels (data points with the same labels are arbitrarily ranked). This suggestion, however, leads to highly unstable ranking results.

Our proposed solution is very simple. We first learn the class probabilities with some soft classification algorithm and then score each data point (query+URL) according to the Expected Relevance.

Recall we assume a training dataset {y_i, x_i}_{i=1}^N, where the class label y_i ∈ {0, 1, 2, 3, 4 = K − 1}. We learn the class probabilities p_{i,k} = Pr(y_i = k), denoted by p̂_{i,k}, and define a scoring function

    S_i = Σ_{k=0}^{K−1} p̂_{i,k} T(k),    (5)

where T(k) is some monotone (increasing) function of the relevance level k. Once we have computed the scores S_i for all data points, we can sort the data points within each query by descending order of S_i. This approach is apparently sensible and highly stable. In fact, we experimented with both T(k) = k and T(k) = 2^k; the performance difference in terms of the NDCG scores was negligible, although T(k) = k appeared to be a slightly better choice (see Figure 3(c) in Appendix II). In this paper, the reported experimental results were based on T(k) = k.

When T(k) = k, the scoring function S_i is the Expected Relevance. Note that any monotone transformation of S_i (e.g., 2^{S_i} − 1) will not change the ranking results. Consequently, the ranking results are not affected by any affine transformation aT(k) + b of T(k) (a > 0), because

    Σ_{k=0}^{K−1} p_{i,k} (a T(k) + b) = a ( Σ_{k=0}^{K−1} p_{i,k} T(k) ) + b,    since Σ_{k=0}^{K−1} p_{i,k} = 1.    (6)
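In code, the conversion from class probabilities to a ranking is a one-liner per query. A minimal sketch (ours; `probs` is any N × K matrix of learned class probabilities):

```python
import numpy as np

def expected_relevance_rank(probs, query_ids):
    """Score each (query, URL) row by Eq. (5) with T(k) = k and rank within query.

    probs     : (N, K) matrix of class probabilities p_hat[i, k].
    query_ids : length-N array mapping each row to its query.
    Returns a dict: query id -> row indices sorted by descending score.
    """
    K = probs.shape[1]
    scores = probs @ np.arange(K)            # S_i = sum_k p_hat[i,k] * k
    ranking = {}
    for q in np.unique(query_ids):
        rows = np.where(query_ids == q)[0]
        ranking[q] = rows[np.argsort(-scores[rows])]
    return ranking
```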
3.4 The Boosting Tree Algorithm for Learning Class Probabilities
For multiple classification, we consider the following common (e.g., [8, 9]) surrogate loss function:

    Σ_{i=1}^N Σ_{k=0}^{K−1} −log(p_{i,k}) 1_{y_i = k}.    (7)
Algorithm 1 implements a boosting tree algorithm for learning the class probabilities p_{i,k}; and we use basically the same implementation later for regression as well as multiple ordinal classification.

Algorithm 1 The boosting tree algorithm for multiple classification, taken from [9, Algorithm 6], although the presentation is slightly different.

0: ỹ_{i,k} = 1 if y_i = k, and ỹ_{i,k} = 0 otherwise
1: F_{i,k} = 0, k = 0 to K − 1, i = 1 to N
2: For m = 1 to M Do
3:   For k = 0 to K − 1 Do
4:     p_{i,k} = exp(F_{i,k}) / Σ_{s=0}^{K−1} exp(F_{i,s})
5:     {R_{j,k,m}}_{j=1}^J = J-terminal-node regression tree for {ỹ_{i,k} − p_{i,k}, x_i}_{i=1}^N
6:     β_{j,k,m} = ((K − 1)/K) [ Σ_{x_i ∈ R_{j,k,m}} (ỹ_{i,k} − p_{i,k}) ] / [ Σ_{x_i ∈ R_{j,k,m}} (1 − p_{i,k}) p_{i,k} ]
7:     F_{i,k} = F_{i,k} + ν Σ_{j=1}^J β_{j,k,m} 1_{x_i ∈ R_{j,k,m}}
8:   End
9: End
There are three main parameters. M is the total number of boosting iterations, J is the tree size (number of terminal nodes), and ν is the shrinkage coefficient. As commented in [9] and verified in our experiments, the performance of the algorithm is not sensitive to these parameters.
In Algorithm 1, Line 5 contains most of the implementation work, i.e., building the regression trees
with J terminal nodes. Appendix I describes an efficient implementation for building the trees.
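For concreteness, here is a compact Python rendering of Algorithm 1 (our own sketch; it substitutes scikit-learn's regression trees for the custom adaptive-quantization trees of Appendix I):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost_mc(X, y, K=5, M=100, J=10, nu=0.05):
    """Gradient boosting for K-class probabilities (Algorithm 1 sketch).
    X is an (N, P) feature matrix, y an integer label vector in {0..K-1}."""
    N = len(y)
    Y = np.eye(K)[y]                       # y_tilde[i, k] = 1{y_i = k}, Line 0
    F = np.zeros((N, K))                   # Line 1
    for m in range(M):
        P = np.exp(F - F.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)  # softmax, Line 4
        for k in range(K):
            resid = Y[:, k] - P[:, k]
            tree = DecisionTreeRegressor(max_leaf_nodes=J).fit(X, resid)  # Line 5
            leaf = tree.apply(X)
            for r in np.unique(leaf):      # Line 6: one-step Newton per leaf
                idx = leaf == r
                denom = np.sum((1 - P[idx, k]) * P[idx, k]) + 1e-12
                beta = (K - 1) / K * resid[idx].sum() / denom
                F[idx, k] += nu * beta     # Line 7
    P = np.exp(F - F.max(axis=1, keepdims=True))
    return P / P.sum(axis=1, keepdims=True)
```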
4 Multiple Ordinal Classification to Further Improve Ranking
There is the possibility to (slightly) further improve our classification-based ranking scheme by taking into account the natural order among the class labels, i.e., multiple ordinal classification.

A common approach to multiple ordinal classification is to learn the cumulative probabilities Pr(y_i ≤ k) instead of the class probabilities Pr(y_i = k) = p_{i,k}. We suggest a simple method similar to the so-called cumulative logits approach known in statistics [1, Section 7.2.1].

We first partition the training data points into two groups: {y_i ≥ 4} and {y_i ≤ 3}. Now we have a binary classification problem and hence we can use exactly the same boosting tree algorithm as for multiple classification. Thus we can learn Pr(y_i ≤ 3) easily. We can similarly partition the data and learn Pr(y_i ≤ 2), Pr(y_i ≤ 1), and Pr(y_i ≤ 0), separately. We then infer the class probabilities

    p_{i,k} = Pr(y_i = k) = Pr(y_i ≤ k) − Pr(y_i ≤ k − 1),    (8)

and again we use the Expected Relevance to compute the ranking scores and sort the URLs.

We call both rankers, based on multiple classification and multiple ordinal classification, McRank.
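A sketch of the cumulative step (ours), reusing any binary class-probability learner — e.g., the `boost_mc` sketch above with K = 2 — for the four threshold problems. The final monotonicity projection is our own safeguard, not part of the paper.

```python
import numpy as np

def ordinal_probs(X, y, fit_binary, K=5):
    """Learn Pr(y <= k) for k = 0..K-2 and convert to class probabilities, Eq. (8).

    fit_binary(X, t) must return Pr(label = 1) for binary targets t in {0, 1}.
    """
    N = len(y)
    cum = np.ones((N, K))                  # Pr(y <= K-1) = 1 by definition
    for k in range(K - 1):
        cum[:, k] = fit_binary(X, (y <= k).astype(int))
    # Safeguard (ours): enforce that the learned cumulatives are non-decreasing.
    cum[:, :-1] = np.maximum.accumulate(cum[:, :-1], axis=1)
    probs = np.diff(np.hstack([np.zeros((N, 1)), cum]), axis=1)  # Eq. (8)
    return np.clip(probs, 0.0, 1.0)
```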
5 Regression-based Ranking Using Boosting Tree Algorithm
With slight modifications, the boosting tree algorithm can be used for regression. Recall the input data are {y_i, x_i}_{i=1}^N, where y_i ∈ {0, 1, 2, 3, 4}. [6] suggested regressing the feature vectors x_i on the response values 2^{y_i} − 1.

Algorithm 2 implements the least-square boosting tree algorithm. The pseudo-code is similar to [9, Algorithm 3], replacing the (l1) least absolute deviation (LAD) loss with the (l2) least-square loss. In fact, we also implemented the LAD boosting tree algorithm, but we found its performance considerably worse than that of the least-square tree boost.
Algorithm 2 The boosting tree algorithm for regression. After we have learned the values S_i, we use them directly as the ranking scores to order the data points within each query. (The line numbering parallels Algorithm 1; the inner loop over classes is not needed.)

0: ỹ_i = 2^{y_i} − 1
1: S_i = (1/N) Σ_{s=1}^N ỹ_s, i = 1 to N
2: For m = 1 to M Do
5:   {R_{j,m}}_{j=1}^J = J-terminal-node regression tree for {ỹ_i − S_i, x_i}_{i=1}^N
6:   β_{j,m} = mean_{x_i ∈ R_{j,m}} (ỹ_i − S_i)
7:   S_i = S_i + ν Σ_{j=1}^J β_{j,m} 1_{x_i ∈ R_{j,m}}
9: End
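The regression counterpart is even shorter in code; a minimal sketch (ours) mirroring Algorithm 2. For the least-square loss, the optimal terminal-node value is exactly the leaf mean of the residuals, which is what a fitted regression tree predicts, so `tree.predict` already implements Lines 6–7.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost_reg(X, y, M=100, J=10, nu=0.05):
    """Least-squares gradient boosting (Algorithm 2 sketch); returns scores S_i."""
    target = 2.0 ** y - 1.0                # Line 0
    S = np.full(len(y), target.mean())     # Line 1
    for m in range(M):
        tree = DecisionTreeRegressor(max_leaf_nodes=J).fit(X, target - S)
        S += nu * tree.predict(X)          # Lines 6-7: leaf means of residuals
    return S
```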
6 Experimental Results
We present the evaluations of 4 ranking algorithms (LambdaRank with two-layer nets, regression,
multiple classification, and multiple ordinal classification) on 4 datasets, including one artificial
dataset and three Web search datasets, denoted by Web-1, Web-2, and Web-3. The artificial dataset
and Web-1 are the same datasets used in [5]. Web-2 is the main dataset used in [13].
For the artificial data and Web-1, [5] reported that LambdaRank improved RankNet by about 1.0 (%)
NDCG. For Web-2, [13] reported that FRank slightly improved RankNet (by about 0.5 (%) NDCG)
and considerably improved RankSVM and RankBoost; but [13] did not compare with LambdaRank.
Our experiment showed that LambdaRank improved FRank by about 0.9 (%) NDCG on Web-2.
6.1 The Datasets
The artificial dataset [5] was meant to remove any variance caused by the quality of features and/or
relevance labels. The data were generated from random cubic polynomials, with 50 features, 50
URLs per query, and 10,000/5,000/10,000 queries for train/validation/test.
The Web search dataset Web-1 [5] has 367 features and 10,000/5,000/10,000 queries for
train/validation/test, with in total 652,500 URLs.
Web-2 [13] has 619 features and 12,000/3,800/3,800 queries for train/validation/test, with in total
1,741,930 URLs. Note that this dataset is only partially labeled with 20 unlabeled URLs per query.
These unlabeled URLs were assigned the level 0 [13].
Web-3 has 450 features and 26,000 queries, with in total 474,590 URLs. We conducted five-fold
cross-validations and report the average NDCG scores.
6.2 The Parameters: M, J, ν

There are three main parameters in the boosting tree algorithm. M is the total number of iterations, J is the number of terminal nodes in each tree, and ν is the shrinkage factor. Our experiments verify that these parameters are not sensitive as long as they are within some "reasonable" ranges [9]. Since these experiments are time-consuming, we did not tune these parameters (M, J, ν) exhaustively; but the experiments appear to be convincing enough to establish the superiority of McRank.

[9] suggested setting ν ≤ 0.1, to avoid over-fitting. We fix ν = 0.05 for the artificial dataset and Web-1, and fix ν = 0.02 for Web-2 and Web-3. The number of terminal nodes, J, should be reasonably big (but not too big) when the dataset is large with a large number of features, because
reasonably big (but not too big) when the dataset is large with a large number of features, because
the tree has to be deep enough to consider higher-order interactions [9]. We let J = 10 for the
artificial dataset and Web-1, J = 40 for Web-2, and J = 20 for Web-3.
With these values of J and ?, we did not observe obvious over-fitting even for a very large number
of boosting iterations M . We will report the results with M = 1000 for the artificial data and Web-1,
M = 2000 for Web-2, and M = 1500 for Web-3.
6.3 The Test NDCG Results at Truncation Level L = 10
Table 1 lists the NDCG results (both the mean and standard deviation, in percentages (%)) for all 4
datasets and all 4 ranking algorithms, evaluated at the truncation level L = 10.
The NDCG scores indicate that McRank (ordinal classification and classification) considerably improves the regression-based ranker and LambdaRank. If we conduct a one-sided t-test, the improvements are significant at about the 98% level. However, multiple ordinal classification did not show significant improvement over multiple classification, except for the artificial dataset.

Table 1: The test NDCG scores produced by 4 rankers on 4 datasets. The average NDCG scores are presented in percentages (%) with the standard deviations in parentheses. Note that for the artificial data and Web-1, the LambdaRank results were taken directly from [5]. We also report the (one-sided) p-values to measure the statistical significance of the improvement of McRank over regression and LambdaRank. For the artificial data, Web-1, and Web-3, we use the ordinal classification results to compute the p-values. However, for Web-2, because our implementation for testing ordinal classification required too much memory for M = 2000, we did not obtain the final test NDCG scores; the partial results indicated that ordinal classification did not improve classification for this dataset. Therefore, we compute the p-values using classification results for Web-2.

                         Artificial [5]       Web-1 [5]            Web-2 [13]           Web-3
Ordinal Classification   85.0 (9.5)           72.4 (24.1)          —                    72.5 (26.5)
Classification           83.7 (9.9)           72.2 (24.1)          75.8 (23.8)          72.4 (27.3)
Regression, p-value      82.9 (10.2), 0       71.7 (24.4), 0.021   74.7 (24.4), 0.023   72.0 (27.6), 0.017
LambdaRank, p-value      74.9 (12.6), 0       71.2 (24.5), 0.0002  74.3 (24.3), 0.003   71.3 (28.8), 3.8×10⁻⁷
For the artificial data, all other three rankers exhibit very large improvements over LambdaRank. This is
probably due to the fact that the artificial data are generated noise-free and hence the flexible (with
high capacity) rankers using boosting tree algorithms tend to fit the data very well.
6.4 The NDCG Results at Various Truncation Levels (L = 1 to 10)
For the artificial dataset and Web-1, [5] also reported the NDCG scores at various truncation levels, L = 1 to 10. To make the comparisons more convincing, we also report results at these truncation levels in Figure 1. For better clarity, we plot the standard deviations separately from the averages. Figure 1 verifies that the improvements shown in Table 1 are not only true for L = 10 but also (essentially) true for smaller truncation levels.
Figure 1: The NDCG scores at truncation levels L = 1 to 10 for the four datasets (Artificial, Web-1, Web-2, Web-3), comparing Ordinal Classification, Classification, Regression, and LambdaRank. Upper panels: the average NDCG scores. Bottom panels: the corresponding standard deviations.
7 Conclusion
The ranking problem has become an important topic in machine learning, partly due to its
widespread applications in many decision-making processes especially in commercial search engines. In one aspect, the ranking problem is difficult because the measures of rank quality are
usually based on sorting, which is not directly optimizable (at least not efficiently). On the other
hand, one can cast ranking into various classical learning tasks such as regression and classification.
The proposed classification-based ranking scheme is motivated by the fact that perfect classifications
lead to perfect DCG scores and the DCG errors are bounded by the classification errors. It appears
natural that the classification-based ranker is more direct and should work better than the regression-based ranker suggested in [6]. To convert classification results into ranking, we propose a simple and
stable mechanism by using the Expected Relevance, computed from the learned class probabilities.
To learn the class probabilities, we implement a boosting tree algorithm for multiple classification and we use the same implementation for multiple ordinal classification and regression. Since
commercial proprietary datasets are usually very large, an adaptive quantization-based approach efficiently implements the boosting tree algorithm, which avoids sorting and has lower memory cost.
Our experimental results have demonstrated that McRank (including multiple classification and multiple ordinal classification) outperforms both the regression-based ranker and the pair-based LambdaRank. However, except for the artificial dataset, we did not observe significant improvement of
ordinal classification over classification.
In summary, we regard the McRank algorithm as (retrospectively) simple, robust, and capable of producing quality ranking results.
Appendix I
An Efficient Implementation for Building Boosting Trees
We use the standard regression tree algorithm [2], which recursively splits the training data points
into two groups on the current ?best? feature that will reduce the mean square errors (MSE) the most.
Efficient (in both time and memory) implementation needs some care. The standard practice [9] is to
pre-sort all the features; then after every split, carefully keep track of the indexes of the data points
and the sorted orders in all other features for the next split.
We suggest a simpler and more efficient approach, by taking advantage of some properties of the
boosting tree algorithm. While the boosting tree algorithm is well-known to be robust and also
accurate, an individual tree has limited predictive power and usually can be built quite crudely.
When splitting on one feature, Figure 2(a) says that sometimes the split point can be chosen within a
certain range without affecting the accuracy (i.e., the reduced MSE due to the split). In Figure 2(b),
we bin (quantize) the data points into two (0/1) levels on the horizontal (i.e., feature) axis. Suppose
we choose the quantization as shown in the Figure 2(b), then the accuracy will not be affected either.
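The split criterion illustrated in Figure 2(a) can be sketched as follows, in our own notation: for one feature column x, find the split value s that most reduces the sum of the left and right MSEs in y.

```python
import numpy as np

def best_split(x, y):
    """Return (MSE reduction, split value s) for one feature column x.
    This is the naive O(n^2) form; running sums give an O(n) version."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    total = np.sum((ys - ys.mean()) ** 2)
    best_gain, best_s = 0.0, None
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue                      # cannot split between equal values
        left, right = ys[:i], ys[i:]
        sse = np.sum((left - left.mean()) ** 2) \
            + np.sum((right - right.mean()) ** 2)
        if total - sse > best_gain:
            best_gain, best_s = total - sse, 0.5 * (xs[i - 1] + xs[i])
    return best_gain, best_s
```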
[Figure 2 appears here: three panels (a)-(c) plotting a response y against one feature x, showing a split range (s_L, s, s_R), a binary binning (Bin 0 / Bin 1), and an adaptive binning with bins numbered 0-12.]
Figure 2: To split on one feature (x), we seek a split point s on x such that after the splitting, the
mean square error (MSE, in the y axis) of the data points at the left plus the MSE at the right is
reduced the most. Panel (a) suggests that in some cases we can choose s in a range (within s_L and
s_R) without affecting the reduced MSE. Panel (b) suggests that, if we bin the data on the x axis to
be binary, the reduced MSE will not be affected either, if the data are binned in the way shown in (b).
Panel (c) pictures an adaptive binning scheme that makes the accuracy loss (if any) as small as possible.
Of course, we would not know ahead of time how to bin the data to avoid losing accuracy. Therefore,
we suggest an adaptive quantization scheme, pictured in Figure 2(c), to make the accuracy loss (if
any) as little as possible. In the pre-processing stage, for each feature, the training data points are
sorted according to the feature value, and we bin the feature values in the sorted order. We start with
a very small initial bin length, e.g., 10^-8. As shown in Figure 2(c), we only bin the data where there
are indeed data, because the boosting tree algorithm will not consider the areas where there are no
data anyway. We set an allowed maximum number of bins, denoted by B. If the bin length is so
small that we need more than B bins, we simply increase the bin length and re-do the quantization.
After the quantization, we replace the original feature value by the bin labels (0, 1, 2, ...). Note that
since we start with a small bin length, ordinal categorical features are naturally taken care of.
A code sketch of this scheme follows the list below.
This simple binning scheme is very effective, particularly for the boosting tree algorithm:
- It simplifies the implementation. After the quantization, there is no need for sorting (and
  keeping track of the indexes after splitting) because we conduct "bucket sort" implicitly.
- It speeds up the computations for the tree-building step, the bottleneck of the algorithm.
- It reduces the memory cost for training. For example, if we set the maximum allowed
  number of bins to B = 2^8, we only need one byte per data entry.
- It does not really result in loss of accuracy. We experimented with both B = 2^8 = 256 and
  B = 2^16 = 65536, and we did not observe real differences in the NDCG scores, although
  the reported experimental results were all based on B = 2^16. See Appendix II, Figure 3(a)(b).
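The following is a minimal sketch of the adaptive quantization, under our own naming. It bins the sorted distinct feature values only where data exist; one simplification to note is that the paper increments the bin length when more than B bins are needed, whereas this sketch doubles it for speed.

```python
import numpy as np

def adaptive_bins(values, B=256, init_len=1e-8):
    """Quantize one feature column to at most B occupied bins."""
    v = np.unique(values)                       # sorted distinct values
    length = init_len
    while True:
        ids = np.floor((v - v[0]) / length).astype(np.int64)
        used, per_value_bin = np.unique(ids, return_inverse=True)
        if used.size <= B:                      # few enough occupied bins
            break
        length *= 2                             # enlarge bins, re-quantize
    idx = np.searchsorted(v, values)            # distinct-value index per point
    return per_value_bin[idx], used.size        # bin labels 0, 1, 2, ...
```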
Appendix II
Some More Experiments on Web-1
Figure 3 (a)(b) presents our experiment with the adaptive quantization scheme on the Web-1 dataset. We
binned the data with the maximum bin number B = 2^3, 2^4, 2^5, 2^6, 2^7, 2^8, and 2^16. In (a) and (b),
the horizontal axis is the "exponent" of B. Panel (a) plots the relative number of total bins in Web-1
as a function of the exponent, normalized by the total number of bins at B = 2^16. Panel (b) plots
the "NDCG loss" due to the quantization, relative to the NDCG scores at B = 2^16. When B = 2^8,
the total number of bins is only about 6% of that at B = 2^16; however, both quantization levels
achieved the same test NDCG scores. Besides the benefit of computational efficiency, quantization
can also be considered a form of "regularization" that slows down the training, as reflected in (b).
[Figure 3 appears here: panel (a) plots the percentage of total bins and panel (b) the NDCG loss (%) against the maximum bin number exponent, for train, validation, and test; panel (c) plots NDCG (%) against boosting iteration for the Relevance and Gain scoring functions.]
Figure 3: Web-1. (a)(b): Experiment with our adaptive quantization scheme. (c): Experiment with
two different scoring functions.
Figure 3 (c) compares two scoring functions to convert learned class probabilities into ranking scores: the Expected Relevance, S_i = Σ_{k=0}^{K-1} p̂_{i,k} k, and the Expected Gain, S_i = Σ_{k=0}^{K-1} p̂_{i,k} (2^k − 1). Panel (c) suggests that using the Expected Relevance is consistently better
than using the Expected Gain, but the differences are small, especially for the test NDCG scores.
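Both scoring rules are one-line reductions over the learned class probabilities; a sketch follows, where p[i, k] holds the estimated probability that document i has relevance level k (K = 5 levels here, a choice of ours for the example).

```python
import numpy as np

def expected_relevance(p):
    K = p.shape[1]
    return p @ np.arange(K)                   # S_i = sum_k p_{i,k} * k

def expected_gain(p):
    K = p.shape[1]
    return p @ (2.0 ** np.arange(K) - 1.0)    # S_i = sum_k p_{i,k} (2^k - 1)

p = np.array([[0.1, 0.2, 0.4, 0.2, 0.1],
              [0.6, 0.3, 0.1, 0.0, 0.0]])
# Rank documents by descending score:
print(np.argsort(-expected_relevance(p)))
```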
References
[1] A. Agresti. Categorical Data Analysis. John Wiley & Sons, Inc., Hoboken, NJ, second edition, 2002.
[2] L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. 1983.
[3] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. In WWW, pages 107-117, 1998.
[4] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In ICML, pages 89-96, 2005.
[5] C. Burges, R. Ragno, and Q. Le. Learning to rank with nonsmooth cost functions. In NIPS, pages 193-200, 2007.
[6] D. Cossock and T. Zhang. Subset ranking using regression. In COLT, pages 605-619, 2006.
[7] Y. Freund, R. Iyer, R. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933-969, 2003.
[8] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. The Annals of Statistics, 28(2):337-407, 2000.
[9] J. Friedman. Greedy function approximation: A gradient boosting machine. The Annals of Statistics, 29(5):1189-1232, 2001.
[10] K. Järvelin and J. Kekäläinen. IR evaluation methods for retrieving highly relevant documents. In SIGIR, pages 41-48, 2000.
[11] T. Joachims. Optimizing search engines using clickthrough data. In KDD, pages 133-142, 2002.
[12] J. Kleinberg. Authoritative sources in a hyperlinked environment. In SODA, pages 668-677, 1998.
[13] M. Tsai, T. Liu, T. Qin, H. Chen, and W. Ma. FRank: a ranking method with fidelity loss. In SIGIR, pages 383-390, 2007.
[14] Z. Zheng, K. Chen, G. Sun, and H. Zha. A regression framework for learning ranking functions using relative relevance judgments. In SIGIR, pages 287-294, 2007.
2,504 | 3,271 | Combined discriminative and generative articulated
pose and non-rigid shape estimation
Leonid Sigal
Alexandru Balan
Michael J. Black
Department of Computer Science
Brown University
Providence, RI 02912
{ls, alb, black}@cs.brown.edu
Abstract
Estimation of three-dimensional articulated human pose and motion from images
is a central problem in computer vision. Much of the previous work has been
limited by the use of crude generative models of humans represented as articulated collections of simple parts such as cylinders. Automatic initialization of
such models has proved difficult and most approaches assume that the size and
shape of the body parts are known a priori. In this paper we propose a method for
automatically recovering a detailed parametric model of non-rigid body shape and
pose from monocular imagery. Specifically, we represent the body using a parameterized triangulated mesh model that is learned from a database of human range
scans. We demonstrate a discriminative method to directly recover the model parameters from monocular images using a conditional mixture of kernel regressors.
This predicted pose and shape are used to initialize a generative model for more
detailed pose and shape estimation. The resulting approach allows fully automatic
pose and shape recovery from monocular and multi-camera imagery. Experimental results show that our method is capable of robustly recovering articulated pose,
shape and biometric measurements (e.g. height, weight, etc.) in both calibrated
and uncalibrated camera environments.
1 Introduction
We address the problem of marker-less articulated pose and shape estimation of the human body
from images using a detailed parametric body model [3]. Most prior work on marker-less pose
estimation and tracking has concentrated on the use of generative Baysian methods [8, 15] that
exploit crude models of body shape (e.g. cylinders [8, 15], superquadrics, voxels [7]). We argue
that a richer representation of shape is needed to make future strides in building better generative
models. Discriminative methods [1, 2, 10, 13, 16, 17], more recently introduced specifically for
the pose estimation task, do not address estimation of the body shape; in fact, they are specifically
designed to be invariant to body shape variations. Any real-world system must be able to estimate
both body shape and pose simultaneously.
Discriminative approaches to pose estimation attempt to learn a direct mapping from image features to 3D pose from either a single image [1, 14, 17] or multiple approximately calibrated views
[9]. These approaches tend to use silhouettes [1, 9, 14] and sometimes edges [16, 17] as image
features and learn a probabilistic mapping in the form of Nearest Neighbor (NN) search, regression
[1], mixture of regressors [2], mixture of Baysian experts [17], or specialized mappings [14]. While
effective and fast, they are inherently limited by the amount and the quality of the training data.
More importantly they currently do not address estimation of the body shape itself. Body shape estimation (independent of the pose) has many applications in biometric authentication and consumer
application domains.
Simplified models of body shape have a long history in computer vision and provide a relatively low
dimensional description of the human form. More detailed triangulated mesh models obtained from
laser range scans have been viewed as too high dimensional for vision applications. Moreover, mesh
models of individuals lack a convenient, low-dimensional, parameterization to allow fitting to new
subjects. In this paper we use the SCAPE model (Shape Completion and Animation of PEople) [3]
which provides a low-dimensional parameterized mesh that is learned from a database of 3D range
scans of different people. The SCAPE model captures correlated body shape deformations of the
body due to the identity of the person and their non-rigid muscle deformation due to articulation.
This model has been shown to allow tractable estimation of parameters from multi-view silhouette
image features [5, 11] and from monocular images in scenes with point lights and cast shadows [4].
In [5] the SCAPE model is projected into multiple calibrated images and an iterative importance
sampling method is used for inference of the pose and shape that best explain the observed silhouettes. Alternatively, in [11] visual hulls are constructed from many silhouette images and the
Iterative Closest Point (ICP) algorithm is used to extract the pose by registering the volumetric features with SCAPE. Both [5] and [11], however, require manual initialization to bootstrap estimation.
In this paper we substitute discriminative articulated pose and shape estimation in place of manual
initialization. In doing so, we extend the current models for discriminative pose estimation to deal
with the estimation of shape, and couple the discriminative and generative methods for more robust
combined estimation. Few combined discriminative and generative pose estimation methods that
exist [16], typically require temporal image data and do not address shape estimation problem.
For discriminative pose and shape estimation we use a Mixture of Experts model, with kernel linear
regression as experts, to learn a direct probabilistic mapping between monocular silhouette contour
features and the SCAPE parameters. To our knowledge this is the first work that has attempted to
recover the 3D shape of the human body from monocular image directly. While the results are typically noisy, they are appropriate as initialization for the more precise generative refinement process.
For generative optimization we make use of the method proposed in [5] where the silhouettes are
predicted in multiple views given the pose and shape parameters of the SCAPE model and are compared to the observed silhouettes using a Chamfer distance measure. For training data we use the
SCAPE model to generate pairs of 3D body shapes and projected image silhouettes. Evaluation is
performed on sequences of two subjects performing free-style motion. We are able to predict pose,
shape, and simple biometric measurements for the subjects from images captured by 4 synchronized
cameras. We also show results for 3D shape estimation from monocular images.
The contributions of this paper are twofold: (1) we formulate a discriminative model for estimating
the pose and shape directly from monocular image features, and (2) we couple this discriminative
method with a generative stochastic optimization for detailed estimation of pose and the shape.
2 SCAPE Body Model
In this section we briefly introduce the SCAPE body model; for details the reader is referred to [3].
A low-dimensional mesh model is learned using principal component analysis applied to a registered
database of range scans. The SCAPE model is defined by a set of parameterized deformations that
are applied to a reference mesh that consists of T triangles {?xt |t ? [1, ..., T ]} (here T = 25, 000).
Each of the triangles in the reference mesh is defined by three vertices in 3D space, (vt,1 , vt,2 , vt,3 ),
and has a corresponding associated body part index pt ? [1, ..., P ] (we work with the model that
has P = 15 body parts corresponding to torso, pelvis, head, and 3 segments for each of the upper
and lower extremities). For convenience, the triangles of the mesh are parameterized by the edges,
?xt = (vt,2 ? vt,1 , vt,3 ? vt,1 ), instead of the vertices themselves. Estimating the shape and
articulated pose of the body amounts to estimating parameters, Y, of the deformations required to
produce the mesh {?yt |t ? [1, ..., T ]}, the projection of which matches the image evidence. The
state-space of the model can be expressed by a vector Y = {?, ?, ?}, where ? ? R3 is the global
3D position for the body, ? ? R37 is the joint-angle parameterization of the articulation with respect
to the skeleton (encoded using Euler angles), and ? ? R9 is the shape parameters encoding the
identity-specific shape of the person. Given a set of estimated parameters Y a new mesh {?yt } can
be produced using:
?yt = Rpt (?)S(?)Q(Rpt (?))?xt
2
(1)
[Figure 1 appears here: panel (a) shows the silhouette centroid p_c, a contour point p_n, and numbered radial bins; panel (b) shows a shape-context histogram over radial bins and θ bins (15° to 345°, in degrees) at a contour point.]
Figure 1: Silhouette contour descriptors. Radial Distance Function (RDF) encoding of the silhouette contour is illustrated in (a); Shape Context (SC) encoding of a contour sample point in (b).
where R_{p_t}(θ) is the rigid 3 × 3 rotation matrix for part p_t and is a function of the joint angles θ;
S(β) is the linear 3 × 3 transformation matrix modeling subject-specific shape variation as a function
of the shape-space parameters β; Q(R_{p_t}(θ)) is a 3 × 3 residual transformation corresponding to the
non-rigid articulation-induced deformations (e.g., bulging of muscles). Notice that Q() is simply
a learned linear function of the rigid rotation and has no independent parameters. To learn Q()
we minimize the residual, in the least-squares sense, between the set of 70 registered scans of one
person under different (but known) articulations. It is also worth mentioning that the body shape linear
deformation sub-space, S(β) = U_s β + μ_s, is learned from a set of 10 meshes of different people
in full correspondence using PCA; hence β can be interpreted as a vector of linear coefficients
corresponding to eigen-directions of the shape space that characterize a given body shape.
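Eq. (1) amounts to a per-triangle composition of three 3 × 3 transforms. A schematic sketch follows, with helper names of our own; one simplification to flag is that in the full SCAPE model S and Q are per-triangle, while a single pair is used here to keep the sketch short.

```python
import numpy as np

def deform_mesh(dx, part_of, R_parts, S, Q):
    """dx: (T, 3, 2) reference edge matrices Delta-x_t; part_of: (T,) part
    index p_t per triangle; R_parts: (P, 3, 3) part rotations R(theta);
    S: (3, 3) shape transform S(beta); Q: callable mapping a rotation to a
    (3, 3) residual deformation."""
    dy = np.empty_like(dx)
    for t in range(dx.shape[0]):
        R = R_parts[part_of[t]]
        dy[t] = R @ S @ Q(R) @ dx[t]      # Eq. (1)
    return dy

# With identity transforms the mesh is unchanged:
T = 4
dx = np.random.randn(T, 3, 2)
out = deform_mesh(dx, np.zeros(T, dtype=int), np.eye(3)[None], np.eye(3),
                  lambda R: np.eye(3))
assert np.allclose(out, dx)
```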
3 Features
In this work we make use of silhouette features for both discriminative and generative estimation of
pose and shape. Silhouettes are commonly used for human pose estimation [1, 2, 13, 15, 17]; while
limited in their representational power, they are easy to estimate from images and fast to synthesize
from a mesh model. The framework introduced here, however, is general and can easily be extended
to incorporate richer features (e.g. edges [15], dense region descriptors [16] such as SIFT or HOG,
or hierarchical descriptors [10] like HMAX, Hyperfeatures, Spatial Pyramid). The use of such richer
feature representations will likely improve both discriminative and generative estimation.
Histograms of shape context. Shape contexts (SC) [6] are rich descriptors based on the local
shape-based histograms of the contour points sampled from the external boundary of the silhouette.
At every sampled boundary point the shape context descriptor is parameterized by the number of
orientation bins, θ, the number of radial-distance bins, r, and the minimum and maximum radial distances, denoted r_in and r_out respectively. As in [1] we achieve scale invariance by making r_out
a function of the overall silhouette height and normalizing the individual shape context histograms
by the sum over all histogram bins. Assuming that N contour points are chosen, at random, to encode the silhouette, the full feature vector can be represented using a θrN-bin histogram. Even for
moderate values of N this produces high dimensional feature vectors that are hard to deal with.
To reduce the silhouette representation to a more manageable size, a secondary histogramming was
introduced by Agarwal and Triggs in [1]. In this, bag-of-words style model, the shape context space
is vector quantized into a set of K clusters (a.k.a. codewords). The K = 100 center codebook is
learned by running k-means clustering on the combined set of shape context vectors obtained from
the large set of training silhouettes. Once the codebook is learned, the quantized K-dimensional
histograms are obtained by voting into the histogram bins corresponding to codebook entries. Soft
voting has been shown [1] to reduce the effects of spatial quantization. The final descriptor X_sc ∈ R^K
is normalized to unit length, to ensure that silhouettes that contain different numbers of contour points
can be compared.
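A minimal sketch of this bag-of-words encoding follows, assuming the K-entry codebook has already been learned offline with k-means; hard nearest-codeword assignment is used here for brevity, whereas the text above uses soft voting.

```python
import numpy as np

def sc_histogram(sc_vectors, codebook):
    """sc_vectors: (N, D) shape contexts; codebook: (K, D) cluster centers."""
    d2 = ((sc_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)                   # codeword per contour point
    hist = np.bincount(nearest, minlength=codebook.shape[0]).astype(float)
    return hist / np.linalg.norm(hist)            # unit-length X_sc
```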
The resulting codebook shape context representation is translation and scale invariant by definition.
Following prior work [1, 13] we let θ = 12, r = 5, r_in = 3, and r_out = αh, where h is the height
of the silhouette and α is typically 1/4, ensuring integration of contour points over regions roughly
similar to the limb size [1]. For shape estimation, we found combining features across multiple
spatial scales (e.g., α = {1/4, 1/2, ...}) to be more effective.
Radial distance function. The Radial Distance Function (RDF) features are defined by a feature
vector X_rdf = {p_c, ||p_1 − p_c||, ||p_2 − p_c||, ..., ||p_N − p_c||}, where p_c ∈ R^2 is the centroid of the image
silhouette and p_i is a point on the silhouette's outer contour; hence ||p_i − p_c|| ∈ R measures the
maximal object extent in the particular direction, denoted by i, from the centroid. For all experiments
we use N = 100 points, resulting in X_rdf ∈ R^102. We explicitly ensure that the dimensionality
of the RDF descriptor is comparable to that of the shape context introduced above. Unlike the shape
context descriptor, the RDF feature vector is neither scale nor translation invariant. Hence, RDF
features are only suited for applications where the camera calibration is known and fixed.
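The RDF descriptor itself is a few lines of array arithmetic; a sketch follows, where the uniform contour sub-sampling is our own choice (the descriptor only requires some N contour points).

```python
import numpy as np

def rdf_descriptor(contour, N=100):
    """contour: (M, 2) outer-boundary points of the silhouette, M >= N."""
    pc = contour.mean(axis=0)                     # silhouette centroid
    step = len(contour) // N
    samples = contour[::step][:N]                 # N evenly spaced points
    dists = np.linalg.norm(samples - pc, axis=1)  # ||p_i - p_c||
    return np.concatenate([pc, dists])            # X_rdf in R^(N+2)
```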
4 Discriminative estimation of pose and shape
To produce initial estimates for the body pose and/or shape in 3D from image features, we need to
model the conditional distribution p(Y|X) of the 3D body state Y given the set of 2D features X.
Intuitively this conditional mapping should be related to the inverse of the camera projection matrix
and, as with many inverse problems, is highly ambiguous. To model this non-linear relationship we
use a Mixture of Experts (MoE) model to represent the conditionals [2, 17].
The parameters of the MoE model are learned by maximizing the log-likelihood of the training data
set D = {(x(1) , y (1) ), ..., (x(N ) , y (N ) )} consisting of N input-output pairs (x(i) , y (i) ). We use an
iterative Expectation Maximization (EM) algorithm, based on type-II maximum likelihood, to learn
parameters of the MoE. Our model for the conditional can be written as:
p(Y|X) ≈ Σ_{k=1}^{M} p_{e,k}(Y|X, Θ_{e,k}) p_{g,k}(k|X, Θ_{g,k})    (2)

where p_{e,k} is the probability of choosing pose Y given the input X according to the k-th expert, and
p_{g,k} is the probability of that input being assigned to the k-th expert using an input-sensitive gating
network; Θ_{e,k} and Θ_{g,k} denote the parameters of the expert and gate distributions respectively.
For simplicity, and to reduce the complexity of the experts, we choose kernel linear regression with
constant offset, Y = ΛX + β, as our expert model, which allows us to solve for the parameters
Θ_{e,k} = {Λ_k, β_k, Σ_k} analytically using weighted linear regression, where

p_{e,k}(Y|X, Θ_{e,k}) = (2π)^{-n/2} |Σ_k|^{-1/2} exp(−(1/2) Δ_k^T Σ_k^{-1} Δ_k),  with Δ_k = Y − Λ_k X − β_k.
Pose estimation is a high-dimensional and ill-conditioned problem, so simple least-squares estimation of the linear regression matrix parameters typically produces severe over-fitting and poor generalization. To reduce this, we add smoothness constraints on the learned mapping. We use a damped
regularization term R(Λ) = λ||Λ||^2 that penalizes large values in the coefficient matrix Λ, where λ is
a regularization parameter. Larger values of λ result in over-damping, where the solution is
underestimated; small values of λ result in over-fitting and possible ill-conditioning. Since the
solution of the ridge regressors is not symmetric under scaling of the inputs, we normalize the
inputs {x^(1), x^(2), ..., x^(N)} by the standard deviation in each dimension before solving.
The weighted ridge regression solution for the parameters Λ_k and β_k can be written in matrix notation
as follows,
\begin{bmatrix} \Lambda_k \\ \beta_k \end{bmatrix} = \begin{bmatrix} D_X^T \mathrm{diag}(Z_k)\, D_X + \mathrm{diag}(\lambda) & D_X^T Z_k \\ Z_k^T D_X & Z_k^T Z_k \end{bmatrix}^{-1} \begin{bmatrix} D_X^T \\ Z_k^T \end{bmatrix} \mathrm{diag}(Z_k)\, D_Y \qquad (3)

where Z_k = [z_k^{(1)}, z_k^{(2)}, ..., z_k^{(N)}]^T is the vector of ownership weights described later in this section
and diag(Z_k) is the diagonal matrix with Z_k on the diagonal; D_X = [x^{(1)}, x^{(2)}, ..., x^{(N)}] and D_Y =
[y^{(1)}, y^{(2)}, ..., y^{(N)}] are the inputs and outputs from the training data D.
Maximization for the gate parameters can be done analytically as well. Given the gate model,
p_{g,k}(k|X, Θ_{g,k}) = (2π)^{-n/2} |Σ_k|^{-1/2} exp(−(1/2)(X − μ_k)^T Σ_k^{-1} (X − μ_k)),
maximization of the gate parameters Θ_{g,k} = {μ_k, Σ_k} becomes similar to mixture-of-Gaussians estimation, where

μ_k = Σ_{n=1}^{N} z_k^{(n)} x^{(n)} / Σ_{n=1}^{N} z_k^{(n)},
Σ_k = (1 / Σ_{n=1}^{N} z_k^{(n)}) Σ_{n=1}^{N} z_k^{(n)} [x^{(n)} − μ_k][x^{(n)} − μ_k]^T,

and z_k^{(n)} is the estimated ownership weight of example n by expert k, estimated by the expectation

z_k^{(n)} = p_{e,k}(y^{(n)}|x^{(n)}, Θ_{e,k}) p_{g,k}(k|x^{(n)}, Θ_{g,k}) / Σ_{j=1}^{M} p_{e,j}(y^{(n)}|x^{(n)}, Θ_{e,j}) p_{g,j}(j|x^{(n)}, Θ_{g,j}).    (4)
The above outlines the full EM procedure for the MoE model. We learn three separate models: for
shape, p(β|X); articulated pose, p(θ|X); and global position, p(τ|X). Similar to [2], we initialize the
EM learning by clustering the output 3D poses using the K-means procedure.
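A compact sketch of this EM loop follows, with variable names of our own. Several simplifications are worth flagging: responsibilities are initialized randomly rather than with K-means, both expert and gate covariances are isotropic rather than full Σ_k, and the ridge term also damps the bias, unlike Eq. (3).

```python
import numpy as np

def fit_moe(X, Y, M=8, lam=1e-2, iters=20, seed=0):
    N, dx = X.shape
    dy = Y.shape[1]
    rng = np.random.default_rng(seed)
    Z = rng.dirichlet(np.ones(M), size=N)          # responsibilities z_k^(n)
    Xb = np.hstack([X, np.ones((N, 1))])           # inputs with bias column
    for _ in range(iters):
        logp = np.zeros((N, M))
        Ws, mus, covs = [], [], []
        for k in range(M):
            z = Z[:, k]
            # M-step, experts: weighted ridge regression (cf. Eq. 3)
            A = Xb.T @ (z[:, None] * Xb) + lam * np.eye(dx + 1)
            Wk = np.linalg.solve(A, Xb.T @ (z[:, None] * Y))
            R = Y - Xb @ Wk                        # residuals Delta_k
            evar = (z @ (R ** 2).sum(1)) / (z.sum() * dy) + 1e-9
            # M-step, gates: Gaussian mean and variance in input space
            mu = (z @ X) / z.sum()
            gvar = (z @ ((X - mu) ** 2).sum(1)) / (z.sum() * dx) + 1e-9
            logp[:, k] = (-0.5 * (R ** 2).sum(1) / evar
                          - 0.5 * dy * np.log(2 * np.pi * evar)
                          - 0.5 * ((X - mu) ** 2).sum(1) / gvar
                          - 0.5 * dx * np.log(2 * np.pi * gvar))
            Ws.append(Wk); mus.append(mu); covs.append((evar, gvar))
        # E-step: normalized ownership weights (Eq. 4)
        logp -= logp.max(axis=1, keepdims=True)
        Z = np.exp(logp)
        Z /= Z.sum(axis=1, keepdims=True)
    return Ws, mus, covs, Z
```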
Implementation details. For articulated pose and shape we experimented with both RDF
and SC features (global position requires RDF features, since SC is location and scale invariant).
SC features tend to work better for pose estimation, whereas RDF features perform better for shape
estimation. Hence, we learn p(β|X_rdf), p(θ|X_sc) and p(τ|X_rdf). In cases where calibration is
unavailable, we estimate the shape using p(β|X_sc), which tends to produce reasonable results but
cannot estimate the overall height. We estimate the number of mixture components, M, and the regularization parameter, λ, by learning a number of models and cross-validating on a withheld dataset.
5 Generative stochastic optimization of pose and shape
Generative stochastic state estimation, as in [5], is handled within an iterative importance sampling
framework [8]. To this end, we represent the posterior distribution over the state (which includes
both pose and shape), p(Y|I) ∝ p(I|Y)p(Y), using a set of N weighted samples {y_i, π_i}_{i=1}^N,
where y_i ∼ q(Y) is a sample drawn from the importance function q(Y) and π_i ∝ p(I|y_i)p(y_i)/q(y_i)
is an associated normalized weight. As in [5] we make no rigorous probabilistic claims about the
generative model, but rather use it as an effective means of performing stochastic search. As required
by the annealing framework, we define a set of importance functions q_k(Y) from which we draw
samples at each respective iteration k. We define the importance functions recursively using a smoothed
version of the posterior from the previous iteration, q_{k+1}(Y) = Σ_{i=1}^{N} π_i^{(k)} N(y_i^{(k)}, σ^{(k)}), encoded using
a kernel Gaussian density with iteration-dependent bandwidth parameter σ^{(k)}. To avoid the effects of
local optima, the likelihood is annealed as follows: p_k(I|Y) = [p(I|Y)]^{T_k} at every iteration, where
T_k is the temperature parameter. As a result, the effects of peaks in the likelihood are introduced slowly.

To initiate the stochastic search an initial distribution is needed. The high dimensionality of the
state space requires this initial distribution to be relatively close to the solution in order to reach
convergence. Here we make use of the discriminative pose and shape estimates from Section 4 to
give us the initial distribution for the posterior. In particular, given the discriminative models for the
shape, p(β|X), position, p(τ|X), and articulated pose, p(θ|X), of the body, we can let (with slight
abuse of notation) y_i^{(0)} ∼ [p(τ|X), p(θ|X), p(β|X)] and π_i^{(0)} = 1/N for i ∈ [1, ..., N].
The outlined stochastic optimization framework also requires an image likelihood function, p(I|Y),
that measures how well our model under a given state Y matches the image evidence, I, obtained
from one or multiple synchronized cameras. We adopt the likelihood function introduced in [5]
that measures the similarity between observed and hypothesized silhouettes. For a given camera
view, a foreground silhouette is computed using a shadow-suppressing background subtraction procedure and is compared to the silhouette obtained by projecting the SCAPE model subject to the
hypothesized state into the image plane (given calibration parameters of the camera). Pixels in the
non-overlapping regions are penalized by the distance to the closest contour point of the silhouette.
This is made efficient by the use of Chamfer distance maps precomputed for both silhouettes.
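The overall annealed search can be sketched as follows. Here `log_like` and `sample_init` are placeholders for the Chamfer-based silhouette likelihood and the discriminative initialization of Section 4; the linear temperature schedule and shrinking kernel bandwidth are our own choices, and the prior and proposal terms of the weights are omitted for brevity.

```python
import numpy as np

def annealed_search(log_like, sample_init, n=200, iters=10, sigma0=0.2, seed=0):
    """log_like(y) -> log p(I|Y); sample_init(rng) -> one initial state draw."""
    rng = np.random.default_rng(seed)
    y = np.stack([sample_init(rng) for _ in range(n)])   # y_i^(0)
    w = np.full(n, 1.0 / n)                              # pi_i^(0)
    for k in range(iters):
        Tk = (k + 1) / iters                             # temperature T_k
        sigma = sigma0 * (1.0 - k / iters) + 1e-3        # bandwidth sigma^(k)
        idx = rng.choice(n, size=n, p=w)                 # sample from q_{k+1}:
        y = y[idx] + sigma * rng.standard_normal(y.shape)  # resample + perturb
        logw = Tk * np.array([log_like(s) for s in y])   # [p(I|Y)]^{T_k}
        w = np.exp(logw - logw.max())
        w /= w.sum()
    return y[np.argmax(w)]                               # best pose/shape sample
```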
6 Experiments
Datasets. In this paper we make use of 3 different datasets. The training dataset, used to learn
discriminative MoE models and codeword dictionary for SC, was generated by synthesizing 3000
silhouette images obtained by projecting corresponding SCAPE body models into an image plane
using calibration parameters of the camera. SCAPE body models, in turn, were generated by randomly sampling the pose from a database of motion capture data (consisting of generally non-cyclic
random motions) and the body shape coefficient from a uniform distribution centered at the mean
shape. A similar synthetic test dataset was constructed, consisting of 597 silhouette-SCAPE body model pairs.
[Figure 2 appears here: panels (a)-(c) showing the input photographs, the segmented silhouettes, and the recovered 3D body models.]
Figure 2: Discriminative estimation of weight loss. Two images of a subject before and after
weight loss are shown in (a), on the left and right respectively. The images were downloaded from
the web (Google) and manually segmented (b). The estimated shape and pose obtained by our
discriminative estimation procedure is shown in (c). In the bottom row, we manually rotated the model
90 degrees for better visibility of the shape variation. Since camera calibration is unavailable, we
use p(β|X_sc) and normalize the before and after shapes to the same reference height. Our method
estimated that the person illustrated in the top row lost 22 lb and the one illustrated in the bottom
row approximately 32 lb; the web-reported weight loss for the two subjects was 24 lb and 64 lb respectively. Notice
that the neutral posture assumed in the images was not present in our training data set, causing visible
artifacts in the estimation of the arm pose. Also, the bottom example pushes the limits of our current
shape model, which was trained using only 10 scans of people, none close to the desired body shape.
In addition, we collected a real dataset consisting of hardware-synchronized motion
capture and video collected using 4 cameras. Two subjects were captured performing roughly the
same class of motions as in the training dataset.
Discriminative estimation of shape. Results of using the MoE model, similar to the one introduced
here, for pose estimation have previously been reported in [2] and [17]. Our experience with the
articulated pose estimation was similar and we omit supporting experiments due to lack of space.
For discriminative estimation of shape we quantitatively compared SC and RDF features by training
two MoE models, p(β|X_sc) and p(β|X_rdf), and found the latter to perform better when camera
calibration is available (on average we achieve a 19.3% performance increase over simply using
the mean shape). We attribute the superior performance of the RDF features to their sensitivity to the
silhouette position and scale, which allows for better estimation of the overall height of the body.
Given the shape we can also estimate the volume of the body and assuming constant density of
water, compute the weight of the person. To illustrate this we estimate approximate weight loss of
a person from monocular uncalibrated images (see Figure 2). Please note that this application is a
proof of concept and not a rigorous experiment.¹ In principle, the SCAPE model is not ideal for
weight calculations, since non-rigid deformations caused by articulations of the body will result in
(unnatural) variations in weight. In practice, however, we found such variations produce relatively
minor artifacts. The weight calculations are, on the other hand, very sensitive to the body shape.
Combining discriminative and generative estimation. Lastly we tested the performance of the
combined discriminative and generative framework by estimating articulated pose, shape and biometric measurements for people in our real dataset. Results of biometric measurement estimates
can be seen in Figure 3; corresponding visual illustration of results is shown in Figure 4.
Analysis of errors. Rarely our system does produce poor pose and/or shape estimates. Typically
these cases can be classified into two categories: (1) minor errors that only affect the pose and are
artifacts of local optima, or (2) more significant errors that affect the shape and result from a poor initial
distribution over the state produced by the discriminative method. The latter arise as a result of the
180-degree view ambiguity and/or pose configuration ambiguities, due to symmetry, in the silhouettes.
¹The "ground truth" weight change here is self-reported and gathered from the Internet.
Subject | Biometric Feature | Actual | Discriminative, Mean (Std) | Disc. + Generative, Mean (Std) | GT + Generative, Mean (Std)
A (34)  | Height (mm)   | 1780 | 1716.1 (41.9)  | 1776.2 (43.8) | 1796.9 (22.9)
A (34)  | Arm Span (mm) | 1597 | 1553.6 (39.7)  | 1597.3 (58.0) | 1607.7 (30.7)
A (34)  | Weight (kg)   | 88   | 83.62 (8.94)   | 83.37 (8.01)  | 85.83 (3.73)
B (30)  | Height (mm)   | 1825 | 1703.8 (88.8)  | 1751.0 (95.2) | 1844.1 (63.8)
B (30)  | Arm Span (mm) | 1668 | 1537.7 (69.2)  | 1547.5 (91.4) | 1659.0 (29.1)
B (30)  | Weight (kg)   | 63   | 80.63 (18.53)  | 64.98 (9.27)  | 66.33 (4.69)

Figure 3: Estimating basic biometric measurements. The table shows basic biometric measurements (height, arm span³ and weight) recovered for two subjects A and B. Mean and standard deviation are reported over 34 and 30 frames for subjects A and B respectively. Every
25th frame from two sequences obtained using 4 synchronized cameras was chosen for estimation. The actual measured
values for the two subjects are shown in the Actual column. Estimates obtained using the discriminative-only
and the discriminative-followed-by-generative shape estimation methods are reported in the next
two columns. The discriminative method used only one view for estimation, whereas the generative method
used all 4 views to obtain a better fit. The last column reports estimates obtained using the ground-truth pose
and the mean shape as initialization for the generative fit (this is the algorithm proposed in [5]). Notice
that generative estimation significantly refines the discriminative estimates. In addition, our approach, which unlike [5] does not require manual initialization, performs comparably (and sometimes
marginally better than [5]) in terms of mean performance (but has roughly twice the variance).
7 Discussion and Conclusions
images. Our approach goes beyond prior work in that it is able to estimate a detailed parametric
model (SCAPE) directly from images without requiring manual intervention or initialization. We
found that the discriminative model produced an effective initialization for generative optimization
procedure and that biometric measurements from the recovered shape were comparable to those produced by prior approaches that required manual initialization [5]. We also introduced and addressed
the problem of discriminative estimation of shape from monocular calibrated and un-calibrated images. More accurate shape estimates from monocular data will require richer image descriptors.
A number of straightforward extensions to our model will likely yield immediate improvements in
performance. Among these are the use of temporal consistency in the discriminative pose (and perhaps shape) estimation [17] and dense image descriptors [10]. In addition, in this work we estimated
the shape space of the SCAPE model from only 10 body scans; as a result, the learned shape space
is rather limited in its expressive power. We believe some of the artifacts of this can be observed in
Figure 2, where the weight of the heavier woman is underestimated.
Acknowledgments. This work was supported by NSF grants IIS-0534858 and IIS-0535075 and a
gift from Intel Corp. We also thank James Davis and Dragomir Anguelov for discussions and data.
References
[1] A. Agarwal and B. Triggs. Recovering 3D human pose from monocular images, IEEE Transactions on
Pattern Analysis and Machine Intelligence, Vol. 28, No. 1, pp. 44-58, 2006.
[2] A. Agarwal and B. Triggs. Monocular human motion capture with a mixture of regressors, IEEE Workshop on Vision for Human-Computer Interaction, 2005.
[3] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers and J. Davis. SCAPE: Shape Completion and
Animation of PEople, ACM Transactions on Graphics (SIGGRAPH), Vol. 24(3), pp. 408-416, 2005.
[4] A. Balan, M. J. Black, H. Haussecker and L. Sigal. Shining a light on human pose: On shadows, shading
and the estimation of pose and shape, International Conference on Computer Vision (ICCV), 2007.
[5] A. Balan, L. Sigal, M. Black, J. Davis and H. Haussecker. Detailed human shape and pose from images,
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007.
[6] S. Belongie, J. Malik and J. Puzicha. Matching shapes, ICCV, pp. 454-461, 2001.
Arm span is defined as the distance between knuckles of left and right arm fully extended in ?T?-pose [5].
7
Subject A
Subject B
Figure 4: Visualizing pose and shape estimation. Examples of simultaneous pose and shape
estimation for subjects A and B are shown on top and bottom respectively. Results are obtained by
discriminatively estimating the distribution over the initial state and then refining this distribution
via generative local stochastic search. Left column illustrates projection of the estimated model into
all 4 views. Middle column shows the projection of the model onto image silhouettes, where light
blue denotes image silhouette, dark red projection of the model and orange non-silhouette regions
that overlap with the projection. On the right are the two views of the estimated 3D model.
[7] K. M. Cheung, S. Baker and T. Kanade. Shape-from-silhouette of articulated objects and its use for
human body kinematics estimation and motion capture, CVPR, Vol. 1, pp. 77-84, 2003.
[8] J. Deutscher, A. Blake and I. Reid. Articulated body motion capture by annealed particle filtering, IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 2, pp. 126-133, 2000.
[9] K. Grauman, G. Shakhnarovich and T. Darrell. Inferring 3D structure with a statistical image-based shape
model, IEEE International Conference on Computer Vision (ICCV), pp. 641-648, 2003.
[10] A. Kanaujia, C. Sminchisescu and D. Metaxas. Semi-supervised hierarchical models for 3D human pose
reconstruction, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007.
[11] L. Muendermann, S. Corazza and T. Andriacchi. Accurately measuring human movement using articulated ICP with soft-joint constraints and a repository of articulated models, CVPR, 2007.
[12] R. Plankers and P. Fua. Articulated soft objects for video-based body modeling, ICCV, 2001.
[13] R. W. Poppe and M. Poel. Comparison of silhouette shape descriptors for example-based human pose
recovery, IEEE Conference on Automatic Face and Gesture Recognition (FG 2006), pp. 541-546, 2006.
[14] R. Rosales and S. Sclaroff. Learning body pose via specialized maps, NIPS, 2002.
[15] L. Sigal, S. Bhatia, S. Roth, M. J. Black and M. Isard. Tracking loose-limbed people, IEEE Conference
on Computer Vision and Pattern Recognition (CVPR), Vol. 1, pp. 421-428, 2004.
[16] C. Sminchisescu, A. Kanaujia and D. Metaxas. Learning joint top-down and bottom-up processes for
3D visual inference, CVPR, Vol. 2, pp. 1743-1752, 2006.
[17] C. Sminchisescu, A. Kanaujia, Z. Li and D. Metaxas. Discriminative density propagation for 3D human
motion estimation, CVPR, Vol. 1, pp. 390-397, 2005.
2,505 | 3,272 | Discriminative Keyword Selection Using Support
Vector Machines
W. M. Campbell, F. S. Richardson
MIT Lincoln Laboratory
Lexington, MA 02420
wcampbell,frichard@ll.mit.edu
Abstract
Many tasks in speech processing involve classification of long term characteristics
of a speech segment such as language, speaker, dialect, or topic. A natural technique for determining these characteristics is to first convert the input speech into
a sequence of tokens such as words, phones, etc. From these tokens, we can then
look for distinctive sequences, keywords, that characterize the speech. In many
applications, a set of distinctive keywords may not be known a priori. In this
case, an automatic method of building up keywords from short context units such
as phones is desirable. We propose a method for the construction of keywords
based upon Support Vector Machines. We cast the problem of keyword selection
as a feature selection problem for n-grams of phones. We propose an alternating filter-wrapper method that builds successively longer keywords. Application
of this method to language recognition and topic recognition tasks shows that the
technique produces interesting and significant qualitative and quantitative results.
1 Introduction
A common problem in speech processing is to identify properties of a speech segment such as
the language, speaker, topic, or dialect. A typical solution to this problem is to apply a detection
paradigm. A set of classifiers is applied to a speech segment to produce a decision. For instance, for
language recognition, we might construct detectors for English, French, and Spanish. The maximum
scoring detector on a speech segment would be the predicted language.
Two basic categories of systems have been applied to the detection problem. A first approach uses
short-term spectral characteristics of the speech and models these with Gaussian mixture models
(GMMs) or support vector machines (SVMs) directly producing a decision. Although quite accurate,
this type of system produces only a classification decision with no qualitative interpretation. A
second approach uses high level features of the speech such as phones and words to detect the
properties. An advantage of this approach is that, in some instances, we can explain why we made a
decision. For example, a particular phone or word sequence might indicate the topic. We adopt this
latter approach for our paper.
SVMs have become a common method of extracting high-level properties of sequences of speech
tokens [1, 2, 3, 4]. Sequence kernels are constructed by viewing a speech segment as a document of
tokens. The SVM feature space in this case is a scaling of co-occurrence probabilities of tokens in
an utterance. This technique is analogous to methods for applying SVMs to text classification [5].
SVMs have been applied at many linguistic levels of tokens as detectors. Our focus in this paper
is at the acoustic phone level. Our goal is to automatically derive long sequences of phones which
*This work was sponsored by the Department of Homeland Security under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not
necessarily endorsed by the United States Government.
we call keywords which are characteristic of a given class. Prior work, for example, in language
recognition [6], has shown that certain words are a significant predictor of a language. For instance,
the presence of the phrase "you know" in a conversational speech segment is a strong indicator of
English. A difficulty in using words as the indicator of the language is that we may not have available
a speech-to-text (STT) system in all languages of interest. In this case, we?d like to automatically
construct keywords that are indicative of the language. Note that a similar problem can occur in
other property extraction problems. For instance, in topic recognition, proper names not in our STT
system dictionary may be a strong indicator of topic.
Our basic approach is to view keyword construction as a feature selection problem. Keywords are
composed of sequences of phones of length n, i.e. n-grams. We would like to find the set of
n-grams that best discriminates between classes. Unfortunately, this problem is difficult to solve
directly, since the number of unique n-grams grows exponentially with increasing n. To alleviate
this difficulty, we propose a method that starts with lower order n-grams and successively builds
higher order n-grams.
The outline of the paper is as follows. In Section 2.1, we review the basic architecture that we use
for phone recognition and how it is applied to the problem. In Section 2.2, we review the application
of SVMs to determining properties. Section 3.1 describes a feature selection method for SVMs.
Section 3.2 presents our method for constructing long context units of phones to automatically create keywords. We use a novel feature selection approach that attempts to find longer strings that
discriminate well between classes. Finally, in Section 4, we show the application of our method
to language and topic recognition problems. We show qualitatively that the method produces interesting keywords. Quantitatively, we show that the method produces keywords which are good
discriminators between classes.
2 Phonotactic Classification
2.1 Phone Recognition
The high-level token extraction component of our system is a phone recognizer based upon the Brno
University (BUT) design [7]. The basic architecture of this system is a monophone HMM system
with a null grammar. Monophones are modeled by three states. This system uses two powerful
components to achieve high accuracy. First, split temporal context (STC) features provide contextual
cues for modeling monophones. Second, the BUT recognizer extensively uses discriminatively
trained feedforward artificial neural networks (ANNs) to model HMM state posterior probabilities.
We developed a phone recognizer for English units using the BUT architecture and automatically
generated STT transcripts on the Switchboard 2 Cell corpora [8]. Training data consisted of approximately 10 hours of speech. ANN training was accomplished using the ICSI Quicknet package [9].
The resulting system has 49 monophones including silence.
The BUT recognizer is used along with the HTK HMM toolkit [10] to produce lattices. Lattices
encode multiple hypotheses with acoustic likelihoods. From a lattice, a 1-best (Viterbi) output can
be produced. Alternatively, we use the lattice to produce expected counts of tokens and n-grams of
tokens.
Expected counts of n-grams can be easily understood as an extension of standard counts. Suppose we have a hypothesized string of tokens, W = w_1, ..., w_n. Bigrams are created by grouping two tokens at a time to form W_2 = w_1_w_2, w_2_w_3, ..., w_{n-1}_w_n; higher order n-grams are formed from longer juxtapositions of tokens. The count function for a given bigram d_i, count(d_i | W_2), is the number of occurrences of d_i in the sequence W_2. To extend counts to a lattice, L, we find the expected count over all possible hypotheses in the lattice,

count(d_i | L) = E_W[ count(d_i | W) ] = \sum_{W \in L} p(W | L) count(d_i | W).   (1)
The expected counts can be computed efficiently by a forward-backward algorithm; more details
can be found in Section 3.3 and [11].
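To make this concrete, the following sketch (our own illustration, not code from the paper; the function and variable names are hypothetical) computes n-gram counts for individual token strings and then the expectation in (1) over a small, explicitly enumerated set of weighted hypotheses. A real system would obtain the same quantities from the lattice with the forward-backward recursion of Section 3.3 rather than by enumerating paths.

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count n-grams in one token string, joining tokens as w1_w2_..."""
    grams = ["_".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return Counter(grams)

def expected_counts(hypotheses, n):
    """Expected n-gram counts over weighted hypotheses, as in (1).

    `hypotheses` is a list of (posterior, tokens) pairs with the
    posteriors p(W|L) summing to one.  Enumerating hypotheses is
    exponential in general; lattices use forward-backward instead.
    """
    total = Counter()
    for p, tokens in hypotheses:
        for gram, c in ngram_counts(tokens, n).items():
            total[gram] += p * c
    return total

# Toy "lattice" with two hypothesized phone strings:
hyps = [(0.7, ["y", "uw", "n", "ow"]), (0.3, ["y", "uw", "m", "ow"])]
counts = expected_counts(hyps, 2)               # expected bigram counts
z = sum(counts.values())
probs = {g: c / z for g, c in counts.items()}   # p(d_i | L), as in (2)
```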
A useful application of expected counts is to find the probability of an n-gram in a lattice. For a lattice, L, the joint probability of an n-gram, d_i, is

p(d_i | L) = count(d_i | L) / \sum_j count(d_j | L)   (2)

where the sum in (2) is performed over all unique n-grams in the utterance.
2.2 Discriminative Language Modeling: SVMs
We focus on token-based language recognition with SVMs using the approach from [1, 4]. Similar to [1], a lattice of tokens, L, is modeled using a bag-of-n-grams approach. Joint probabilities of the unique n-grams, d_j, are calculated on a per-conversation basis, p(d_j | L), see (2). Then, the probabilities are mapped to a sparse vector with entries

D_j p(d_j | W).   (3)
The selection of the weighting, D_j, in (3) is critical for good performance. A typical choice is of the form

D_j = min( C_j, g_j( 1 / p(d_j | all) ) )   (4)

where g_j(·) is a function which squashes the dynamic range, and C_j is a constant. The probability p(d_j | all) in (4) is calculated from the observed probability across all classes. The squashing function should monotonically map the interval [1, ∞) to itself to suppress large inverse probabilities. Typical choices for g_j are g_j(x) = \sqrt{x} and g_j(x) = log(x) + 1. In both cases, the squashing function g_j normalizes out the typicality of a feature across all classes. The constant C_j limits the effect of any one feature on the kernel inner product. If we set C_j = 1, then D_j = 1 for all j. For the experiments in this paper, we use g_j(x) = \sqrt{x}, which is suited to high-frequency token streams.
The general weighting of probabilities is then combined to form a kernel between two lattices, see [1] for more details. For two lattices, L_1 and L_2, the kernel is

K(L_1, L_2) = \sum_j D_j^2 p(d_j | L_1) p(d_j | L_2).   (5)
Intuitively, the kernel in (5) says that if the same n-grams are present in two sequences and the
normalized frequencies are similar there will be a high degree of similarity (a large inner product).
If n-grams are not present, then this will reduce similarity, since one of the probabilities in (5) will be zero. The normalization D_j ensures that n-grams with large probabilities do not dominate the kernel function. The kernel can alternatively be viewed as a linearization of the log-likelihood ratio [1].
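As a sketch of how (4) and (5) fit together, the snippet below builds the weighting D_j with the square-root squashing and evaluates the kernel between two conversations represented as dictionaries of n-gram probabilities from (2). This is our own illustration with hypothetical names; a production system would use sparse vector libraries.

```python
import math

def make_weights(background, C=float("inf")):
    """D_j = min(C_j, g(1 / p(d_j | all))) with g(x) = sqrt(x), as in (4)."""
    return {g: min(C, math.sqrt(1.0 / p)) for g, p in background.items()}

def lattice_kernel(p1, p2, D):
    """K(L1, L2) = sum_j D_j^2 p(d_j|L1) p(d_j|L2), as in (5).

    n-grams absent from either conversation contribute zero, so only
    the intersection of the two supports needs to be visited.
    """
    return sum(D[g] ** 2 * p1[g] * p2[g] for g in set(p1) & set(p2) if g in D)
```

Here `background` holds p(d_j | all) estimated over the whole training set, and `p1`, `p2` are per-conversation n-gram probability dictionaries.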
Incorporating the kernel (5) into an SVM system is straightforward. SVM training and scoring
require only a method of kernel evaluation between two objects that produces positive definite kernel
matrices (the Mercer condition). We use the package SVMTorch [12]. Training is performed with a
one-versus-all strategy. For each target class, we group all remaining class data and then train with
these two classes.
3 Discriminative Keyword Selection
3.1 SVM Feature Selection
A first step towards an algorithm for automatic keyword generation using phones is to examine
feature selection methods. Ideally, we would like to select over all possible n-grams, where n is
varying, the most discriminative sequences for determining a property of a speech segment. The
number of features in this case is prohibitive, since it grows exponentially with n. Therefore, we
have to consider alternate methods.
As a first step, we examine feature selection for fixed n and look for keywords with n or fewer phones.
Suppose that we have a set of candidate keywords. Since we are already using an SVM, a natural
algorithm for discriminative feature selection in this case is to use a wrapper method [13].
Suppose that the optimized SVM solution is

f(X) = \sum_i \alpha_i K(X, X_i) + c   (6)

and

w = \sum_i \alpha_i b(X_i)   (7)
where b(X_i) is the vector of weighted n-gram probabilities in (3). We note that the kernel presented in (5) is linear. Also, the n-gram probabilities have been normalized in (3) by their probability across the entire data set. Intuitively, because of this normalization and since f(X) = w^T b(X) + c, large magnitude entries in w correspond to significant features.
A confirmation of this intuitive idea is the algorithm of Guyon et al. [14]. Guyon proposes an iterative wrapper method for feature selection for SVMs with these basic steps:

• For a set of features, S, find the SVM solution with model w.
• Rank the features by their corresponding model entries w_i^2, where w_i is the ith entry of w in (7).
• Eliminate low ranking features using a threshold.

The algorithm may be iterated multiple times.
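A hedged sketch of one iteration of this wrapper method, using scikit-learn's LinearSVC in place of the SVMTorch package used in the paper (our substitution; names are ours). Rows of X are the weighted n-gram probability vectors of (3), and y holds one-versus-all labels.

```python
import numpy as np
from sklearn.svm import LinearSVC

def guyon_iteration(X, y, threshold):
    """One wrapper iteration: fit a linear SVM, rank features by w_i^2,
    and keep only the features whose squared weight clears the threshold.
    """
    clf = LinearSVC().fit(X, y)
    scores = clf.coef_.ravel() ** 2        # w_i^2 for a binary problem
    keep = np.flatnonzero(scores >= threshold)
    return keep, X[:, keep]
```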
Guyon's algorithm for feature selection can be used for picking significant n-grams as keywords.
We can create a kernel which is the sum of kernels as in (5) up to the desired n. We then train an
SVM and rank n-grams according to the magnitude of the entries in the SVM model vector, w.
As an example, we have looked at this feature selection method for a language recognition task with trigrams (to be described in Section 4). Figure 1 provides motivation for the applicability of Guyon's feature selection method. The figure shows two functions. First, the cumulative distribution function (CDF) of the SVM model values, |w_i|, is shown. The CDF has an S-curve shape; i.e., only a small set of model weights has large magnitudes. The second curve shows the equal error rate (EER) of the task as a function of applying one iteration of the Guyon algorithm and retraining the SVM. EER is defined as the value where the miss and false alarm rates are equal. All features with |w_i| below the value on the x-axis are discarded in the first iteration. From the figure, we see that only a small fraction (< 5%) of the features are needed to obtain good error rates. This result suggests that a small subset of keywords carries most of the discriminative power for the task.
[Figure 1 plots two curves against the elimination threshold (log scale, 10^-4 to 10^0): the CDF of the SVM model weights |w_i| (left axis, 0 to 1) and the equal error rate (right axis, 0 to 0.25).]

Figure 1: Feature selection for a trigram language recognition task using Guyon's method
3.2 Keywords via an alternating wrapper/filter method
The algorithm in Section 3.1 gives a method for n-gram selection for fixed n. Now, suppose we want to find keywords for arbitrary n. One possible hypothesis for keyword selection is that since higher order n-grams are discriminative, the lower order n-grams within those keywords will also be discriminative. Therefore, it makes sense to find distinguishing lower order n-grams and then construct longer units from these. On the basis of this idea, we propose the following algorithm for keyword construction:
Keyword Building Algorithm

• Start with an initial value of n = n_s. Initialize the set S*_n to all possible n-grams of phones, including lower order grams. By default, let S_1 be the set of all phones.
• (Wrapper Step) General n. Apply the feature selection algorithm in Section 3.1 to produce a subset of distinguishing n-grams, S_n ⊆ S*_n.
• (Filter Step) Construct a new set of (n + 1)-grams by juxtaposing elements from S_n with phones. Nominally, we take this step to be juxtaposition on the right and left, S*_{n+1} = {dp, qd | d ∈ S_n, p ∈ S_1, q ∈ S_1}.
• Iterate to the wrapper step.
• Output: S_n at some stopping n.
A few items should be noted about the proposed keyword building algorithm. First, we call the second feature selection process a filter step, since induction has not been applied to the (n + 1)-gram
features. Second, note that the purpose of the filter step is to provide a candidate set of possible
(n + 1)-grams which can then be more systematically reduced. Third, several potential algorithms
exist for the filter step. In our experiments and in the algorithm description, we nominally append
one phone to the beginning and end of an n-gram. Another possibility is to try to combine overlapping n-grams. For instance, suppose the keyword is some_people which has phone transcript
s_ah_m_p_iy_p_l. Then, if we are looking at 4-grams, we might see as top features s_ah_m_p and
p_iy_p_l and combine these to produce a new keyword.
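The sketch below (our own; `select_ngrams` is a hypothetical stand-in for the SVM wrapper step of Section 3.1) shows the overall control flow of the keyword building algorithm, with the nominal left/right juxtaposition filter step.

```python
def build_keywords(phones, select_ngrams, n_stop=5):
    """Alternating wrapper/filter keyword construction (Section 3.2).

    `select_ngrams(candidates, n)` stands in for the SVM wrapper step of
    Section 3.1 and returns the distinguishing n-grams S_n.
    """
    candidates = set(phones)                      # S*_1: all phones
    survivors = set(phones)
    for n in range(1, n_stop + 1):
        survivors = select_ngrams(candidates, n)  # wrapper step: S_n
        # Filter step: (n+1)-gram candidates by left/right juxtaposition,
        # S*_{n+1} = {dp, qd | d in S_n; p, q in S_1}.
        candidates = ({d + "_" + p for d in survivors for p in phones} |
                      {q + "_" + d for d in survivors for q in phones})
    return survivors
```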
3.3 Keyword Implementation
The expected n-gram counts were computed from lattices using the forward-backward algorithm. Equation (8) gives the posterior probability of a connected sequence of arcs in the lattice, where src_nd(a) and dst_nd(a) are the source and destination node of arc a, ℓ(a) is the likelihood associated with arc a, α(n) and β(n) are the forward and backward probabilities of reaching node n from the beginning or end of the lattice L respectively, and γ(L) is the total likelihood of the lattice (the α(·) of the final node or β(·) of the initial node of the lattice):

p(a_j, ..., a_{j+n}) = α(src_nd(a_j)) ℓ(a_j) ... ℓ(a_{j+n}) β(dst_nd(a_{j+n})) / γ(L)   (8)

Now define the posterior probability of a node n as p(n) = α(n)β(n) / γ(L), and likewise the posterior of an arc a as p(a) = α(src_nd(a)) ℓ(a) β(dst_nd(a)) / γ(L). Then equation (8) becomes:

p(a_j, ..., a_{j+n}) = [ p(a_j) ... p(a_{j+n}) ] / [ p(src_nd(a_{j+1})) ... p(src_nd(a_{j+n})) ].   (9)
Equation (9) is attractive because it provides a way of computing the path posteriors locally using
only the individual arc and node posteriors along the path. We use this computation along with a
trie structure [15] to compute the posteriors of our keywords.
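A minimal sketch of this local computation (our own, with hypothetical names): given per-arc and per-node posteriors from a single forward-backward pass, the posterior of a connected arc sequence follows directly from (9).

```python
def path_posterior(arcs, arc_post, node_post, src_nd):
    """Posterior of a connected arc sequence a_j, ..., a_{j+n}, as in (9):
    the product of the arc posteriors divided by the node posteriors of
    the interior nodes (the source nodes of all arcs but the first).
    Both posterior tables come from one forward-backward pass.
    """
    p = 1.0
    for a in arcs:
        p *= arc_post[a]
    for a in arcs[1:]:
        p /= node_post[src_nd(a)]
    return p
```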
4 Experiments
4.1 Language Recognition Experimental Setup
The phone recognizer described in Section 2.1 was used to generate lattices across a train and an
evaluation data set. The training data set consists of more than 360 hours of telephone speech
spanning 13 different languages and coming from a variety of different sources including Callhome,
Callfriend and Fisher. The evaluation data set is the NIST 2005 Language Recognition Evaluation
data consisting of roughly 20,000 utterances (with duration of 30, 10 or 3 seconds depending on the
task) coming from three collection sources including Callfriend, Mixer and OHSU. We evaluated
our system for the 30 and 10 second tasks under the NIST 2005 closed condition, which limits
the evaluation data to 7 languages (English, Hindi, Japanese, Korean, Mandarin, Spanish and Tamil)
coming only from the OHSU data source.
The training and evaluation data was segmented using an automatic speech activity detector and
segments smaller than 0.5 seconds were thrown out. We also sub-segmented long audio files in the
training data to keep the duration of each utterance to around 5 minutes (a shorter duration would
have created too many training instances). Lattice arcs with posterior probabilities lower than 10^{-6} were removed and lattice expected counts smaller than 10^{-3} were ignored. The top and bottom
600 ranking keywords for each language were selected after each training iteration. The support
vector machine was trained using a kernel formulation that requires pre-computing all of the kernel distances between the data points and then using an alternate kernel which simply indexes into the resulting distance matrix (this approach becomes difficult when the number of data points is too large).
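The precomputation strategy can be sketched as follows, substituting scikit-learn's `kernel='precomputed'` interface for SVMTorch's index-kernel trick (our substitution; `kernel_fn` plays the role of the kernel in (5) and `D` the weights from (4)).

```python
import numpy as np
from sklearn.svm import SVC

def train_detector(vectors, labels, target, D, kernel_fn):
    """One-versus-all detector for `target` with a precomputed Gram matrix."""
    n = len(vectors)
    G = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):                  # the Gram matrix is symmetric
            G[i, j] = G[j, i] = kernel_fn(vectors[i], vectors[j], D)
    y = [1 if label == target else -1 for label in labels]
    return SVC(kernel="precomputed").fit(G, y)
```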
4.2 Language Recognition Results (Qualitative and Quantitative)
To get a sense of how well our keyword building algorithm was working, we looked at the top
ranking keywords from the English model only (since our phone recognizer is trained using the
English phone set). Table 1 summarizes a few of the more compelling phone 5-grams, and a possible
keyword that corresponds to each one. Not surprisingly, we noticed that in the list of top-ranking n-grams there were many variations or partial n-gram matches to the same keyword, as well as n-grams that didn't correspond to any apparent keyword.
The equal error rates for our system on the NIST 2005 language recognition evaluation are summarized in Table 2. The 4-gram system gave a relative improvement of 12% on the 10 second task and
9% on the 30 second task, but despite the compelling keywords produced by the 5-gram system, the
performance actually degraded significantly compared to the 3-gram and 4-gram systems.
Table 1: Top ranking keywords for 5-gram SVM for English language recognition model

  Rank   Phones               Keyword
  1      SIL_Y_UW_N_OW        you know
  3      !NULL_SIL_Y_EH_AX    <s> yeah
  4      !NULL_SIL_IY_M_TH    <s> ???
  6      P_IY_P_AX_L          people
  7      R_IY_L_IY_SIL        really
  8      Y_UW_N_OW_OW         you know (var)
  17     T_L_AY_K_SIL         ? like
  23     L_AY_K_K_SIL         like (var)
  27     R_AY_T_SIL_!NULL     right </s>
  29     HH_AE_V_AX_N         have an
  37     !NULL_SIL_W_EH_L     <s> well
Table 2: %EER for 10 and 30 second NIST language recognition tasks

  N       1      2      3      4      5
  10sec   25.3   16.5   11.3   10.0   13.6
  30sec   18.3   07.4   04.3   03.9   05.6
4.3 Topic Recognition Experimental Setup
Topic recognition was performed using a subset of the phase I Fisher corpus (English) from LDC.
This corpus consists of 5,851 telephone conversations. Participants were given instructions to discuss a topic for 10 minutes, drawn from 40 different possible topics. Topics included "Education", "Hobbies", "Foreign Relations", etc. Prompts were used to elicit discussion on the topics. An example
prompt is:
Movies: Do each of you enjoy going to the movies in a theater, or would you
rather rent a movie and stay home? What was the last movie that you saw? Was it
good or bad and why?
For our experiments, we used 2750 conversation sides for training. We also constructed development
and test sets of 1372 conversation sides each. The training set was used to find keywords and models
for topic detection.
4.4 Topic Recognition Results
We first looked at top ranking keywords for several topics; some results are shown in Table 3. We
can see that many keywords show a strong correspondence with the topic. Also, there are partial
keywords which correspond to what appear to be longer keywords, e.g. "eh_t_s_ih_k" corresponds to get sick.
As in the language recognition task, we used EER as the performance measure. Results in Table 4 show the performance for several n-gram orders. Performance improves going from 3-grams to 4-grams. But, as with the language recognition task, we see a degradation in performance for 5-grams.
5 Conclusions and future work
We presented a method for automatic construction of keywords given a discriminative speech classification task. Our method was based upon successively building longer span keywords from shorter
span keywords using phones as a fundamental unit. The problem was cast as a feature selection
problem, and an alternating filter and wrapper algorithm was proposed. Results showed that reasonable keywords and improved performance could be achieved using this methodology.
Table 3: Top keyword for 5-gram SVM in Topic Recognition

  Topic                                               Phones          Keyword
  Professional Sports on TV                           S_P_AO_R_T      sport
  Hypothetical: Time Travel                           G_OW_B_AE_K     go back
  Affirmative Action                                  AX_V_AE_K_CH    [affirmat]ive act[ion]
  US Public Schools                                   S_K_UW_L_Z      schools
  Movies                                              IY_V_IY_D_IY    DVD
  Hobbies                                             HH_OH_B_IY_Z    hobbies
  September 11                                        HH_AE_P_AX_N    happen
  Issues in the Middle East                           IH_Z_R_IY_L     Israel
  Illness                                             EH_T_S_IH_K     [g]et sick
  Hypothetical: One Million Dollars to leave the US   Y_UW_M_AY_Y     you may
Table 4: Performance of Topic Detection for Different n-gram orders

  n-gram order   3       4      5
  EER (%)        10.22   8.95   9.40
Numerous possibilities exist for future work on this task. First, extension and experimentation on
other tasks such as dialect and speaker recognition would be interesting. The method has the potential for discovery of new interesting characteristics. Second, comparison of this method with other
feature selection methods may be appropriate [16]. A third area for extension is various technical
improvements. For instance, we might want to consider more general keyword models where skips
are allowed (or more general finite state transducers [17]). Also, alternate methods for the filter step for constructing higher order n-grams are a good area for exploration.
References
[1] W. M. Campbell, J. P. Campbell, D. A. Reynolds, D. A. Jones, and T. R. Leek, "Phonetic speaker recognition with support vector machines," in Advances in Neural Information Processing Systems 16, Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, Eds. MIT Press, Cambridge, MA, 2003.
[2] W. M. Campbell, T. Gleason, J. Navratil, D. Reynolds, W. Shen, E. Singer, and P. Torres-Carrasquillo, "Advanced language recognition using cepstra and phonotactics: MITLL system performance on the NIST 2005 language recognition evaluation," in Proc. IEEE Odyssey, 2006.
[3] Bin Ma and Haizhou Li, "A phonotactic-semantic paradigm for automatic spoken document classification," in The 28th Annual International ACM SIGIR Conference, Brazil, 2005.
[4] Lu-Feng Zhai, Man-hung Siu, Xi Yang, and Herbert Gish, "Discriminatively trained language models using support vector machines for language identification," in Proc. IEEE Odyssey: The Speaker and Language Recognition Workshop, 2006.
[5] T. Joachims, Learning to Classify Text Using Support Vector Machines, Kluwer Academic Publishers, 2002.
[6] W. M. Campbell, F. Richardson, and D. A. Reynolds, "Language recognition with word lattices and support vector machines," in Proceedings of ICASSP, 2007, pp. IV-989 - IV-992.
[7] Petr Schwarz, Pavel Matejka, and Jan Cernocky, "Hierarchical structures of neural networks for phoneme recognition," in Proceedings of ICASSP, 2006, pp. 325-328.
[8] Linguistic Data Consortium, "Switchboard-2 corpora," http://www.ldc.upenn.edu.
[9] "ICSI QuickNet," http://www.icsi.berkeley.edu/Speech/qn.html.
[10] S. Young, Gunnar Evermann, Thomas Hain, D. Kershaw, Gareth Moore, J. Odell, D. Ollason, V. Valtchev, and P. Woodland, The HTK Book, Entropic, Ltd., Cambridge, UK, 2002.
[11] L. Rabiner and B.-H. Juang, Fundamentals of Speech Recognition, Prentice-Hall, 1993.
[12] Ronan Collobert and Samy Bengio, "SVMTorch: Support vector machines for large-scale regression problems," Journal of Machine Learning Research, vol. 1, pp. 143-160, 2001.
[13] Avrim L. Blum and Pat Langley, "Selection of relevant features and examples in machine learning," Artificial Intelligence, vol. 97, no. 1-2, pp. 245-271, Dec. 1997.
[14] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, "Gene selection for cancer classification using support vector machines," Machine Learning, vol. 46, no. 1-3, pp. 389-422, 2002.
[15] Konrad Rieck and Pavel Laskov, "Language models for detection of unknown attacks in network traffic," Journal of Computer Virology, vol. 2, no. 4, pp. 243-256, 2007.
[16] Takaaki Hori, I. Lee Hetherington, Timothy J. Hazen, and James R. Glass, "Open-vocabulary spoken utterance retrieval using confusion networks," in Proceedings of ICASSP, 2007.
[17] C. Cortes, P. Haffner, and M. Mohri, "Rational kernels," in Advances in Neural Information Processing Systems 15, S. Becker, S. Thrun, and K. Obermayer, Eds., MIT Press, Cambridge, MA, 2003, pp. 601-608.
2,506 | 3,273 | An in-silico Neural Model of Dynamic Routing
through Neuronal Coherence
Devarajan Sridharan*†, Brian Percival*‡, John Arthur§ and Kwabena Boahen§
†Program in Neurosciences, ‡Department of Electrical Engineering and §Department of Bioengineering
Stanford University
*These authors contributed equally
{dsridhar, bperci, jarthur, boahen}@stanford.edu
Abstract
We describe a neurobiologically plausible model to implement dynamic routing
using the concept of neuronal communication through neuronal coherence. The
model has a three-tier architecture: a raw input tier, a routing control tier, and an
invariant output tier. The correct mapping between input and output tiers is realized by an appropriate alignment of the phases of their respective background
oscillations by the routing control units. We present an example architecture, implemented on a neuromorphic chip, that is able to achieve circular-shift invariance.
A simple extension to our model can accomplish circular-shift dynamic routing
with only O(N) connections, compared to O(N^2) connections required by traditional models.
1 Dynamic Routing Circuit Models for Circular-Shift Invariance
Dynamic routing circuit models are among the most prominent neural models for invariant recognition [1] (also see [2] for review). These models implement shift invariance by dynamically changing
spatial connectivity to transform an object to a standard position or orientation. The connectivity
between the raw input and invariant output layers is controlled by routing units, which turn certain
subsets of connections on or off (Figure 1A). An important feature of this model is the explicit representation of what and where information in the main network and the routing units, respectively;
the routing units use the where information to create invariant representations.
Traditional solutions for shift invariance are neurobiologically implausible for at least two reasons.
First, there are too many synaptic connections: for N input neurons, N output neurons and N possible input-output mappings, the network requires O(N^2) connections in the routing layer, between each of the N routing units and each set of N connections that that routing unit gates (Figure 1A). Second, these connections must be extremely precise: each routing unit must activate an input-output mapping (N individual connections) corresponding to the desired shift (as highlighted in
Figure 1A). Other approaches that have been proposed, including invariant feature networks [3,4],
also suffer from significant drawbacks, such as the inability to explicitly represent where information
[2]. It remains an open question how biology could achieve shift invariance without profligate and
precise connections.
In this article, we propose a simple solution for shift invariance for quantities that are circular or periodic in nature, which we call circular-shift invariance (CSI): orientation invariance in vision and key invariance in music. The visual system may create orientation-invariant representations to aid recognition
under conditions of object rotation or head-tilt [5,6]; a similar mechanism could be employed by
the auditory system to create key-invariant representations under conditions where the same melody is played in different keys.

Figure 1: Dynamic routing. A In traditional dynamic routing, connections from the (raw) input layer to the (invariant) output layer are gated by routing units. For instance, the mapping from A to 5, B to 6, . . . , F to 4 is achieved by turning on the highlighted routing unit. B In time-division multiplexing (TDM), the encoder samples input channels periodically (using a rotating switch) while the decoder sends each sample to the appropriate output channel (based on its time bin). TDM can be extended to achieve a circular-shift transformation by altering the angle between encoder and decoder switches (θ), thereby creating a rotated mapping between input and output channels (adapted from [7]).

Similar to orientation, which is a periodic quantity, musical notes one
octave apart sound alike, a phenomenon known as octave equivalence [8]. Thus, the problems of
key invariance and orientation invariance admit similar solutions.
Deriving inspiration from time-division multiplexing (TDM), we propose a neural network for CSI
that uses phase to encode and decode information. We modulate the temporal window of communication between (raw) input and (invariant) output neurons to achieve the appropriate input-output mapping. Extending TDM, any particular circular-shift transformation can be accomplished by changing the relative angle, θ, between the rotating switches of the encoder (that encodes the raw input in time) and decoder (that decodes the invariant output in time) (Figure 1B). This obviates the need to hardwire routing control units that specifically modulate the strength of each possible input-output connection, thereby significantly reducing the complexity inherent in the traditional dynamic
routing solution. Similarly, a remapping between the input and output neurons can be achieved by
introducing a relative phase-shift in their background oscillations.
2 Dynamic Routing through Neuronal Coherence
To modulate the temporal window of communication, the model uses a ring of neurons (the oscillation ring) to select the pool of neurons (in the projection ring) that encode or decode information at a
particular time (Figure 2A). Each projection pool encodes a specific value of the feature (for example, one of twelve musical notes). Upon activation by external input, each pool is active only when
background inhibition generated by the oscillation ring (outer ring of neurons) is at a minimum. In
addition to exciting 12 inhibitory interneurons in the projection ring, each oscillation ring neuron
excites its nearest 18 neighbors in the clockwise direction around the oscillation ring. As a result, a
wave of inhibition travels around the projection ring that allows only one pool to be excitable at any
point in time. These neurons become excitable at roughly the same time (numbered sectors, inner
ring) by virtue of recurrent excitatory intra-pool connections.
Decoding is accomplished by a second tier of rings (Figure 2B). The projection ring of the first (input) tier connects all-to-all to the projection ring of the second (output) tier. The two oscillation rings
create a window of excitability for the pools of neurons in their respective projection rings. Hence,
the most effective communication occurs between input and output pools that become excitable at
the same time (i.e. are oscillating in phase with one another [9]).
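Abstracting away the spiking dynamics, the routing principle reduces to matching time bins; the toy sketch below (our own, not the chip implementation) makes the resulting input-output mapping explicit, using the 7-pool shift tested in Section 4.

```python
N = 12  # pools per projection ring

def routed_pool(input_pool, phase_shift):
    """Input pool i fires only in time bin i; delaying the output ring's
    background wave by `phase_shift` bins makes the output pool excitable
    in bin i the circularly shifted one (pools numbered 1..N)."""
    return (input_pool - 1 + phase_shift) % N + 1

assert routed_pool(1, 7) == 8   # the 7-pool shift tested in Section 4
```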
The CSI problem is solved by introducing a phase-shift between the input and output tiers. If they are exactly in phase, then an input pool is simply mapped to the output pool directly above it. If their phases are different, the input is dynamically routed to an appropriate circularly shifted position in the output tier. Such changes in phase are analogous to adjusting the angle of the rotating switch at either the encoder or the decoder in TDM (see Figure 1B). There is some evidence that neural systems could employ phase relationships of subthreshold oscillations to selectively target neural populations [9-11].

Figure 2: Double-Ring Network for Encoding and Decoding. A The projection (inner) ring is divided into (numbered) pools. The oscillation (outer) ring modulates sub-threshold activity (waveforms) of the projection ring by exciting (black distribution) inhibitory neurons that inhibit neighboring projection neurons. A wave of activity travels around the oscillation ring due to asymmetric excitatory connections, creating a corresponding wave of inhibitory activity in the projection ring, such that only one pool of projection neurons is excitable (spikes) at a given time. B Two instances of the double-ring structure from A. The input projection ring connects all-to-all to the output projection ring (dashed lines). Because each input pool will spike only during a distinct time bin, and each output pool is excitable only in a certain time bin, communication occurs between input and output pools that are oscillating in phase with each other. An appropriate phase offset between input and output oscillation rings realizes the desired circular shift (input pool H to output pool 1, solid arrow). C Interactions among pools highlighted in B.
3 Implementation in Silicon
We implemented this solution to CSI on a neuromorphic silicon chip [12]. The neuromorphic chip
has neurons whose properties resemble that of biological neurons; these neurons even have intrinsic differences, thereby mimicking heterogeneity in real neurobiological systems. The chip uses a
conductance-based spiking model for both inhibitory and excitatory neurons. Inhibitory neurons
project to nearby excitatory and inhibitory neurons via a diffusor network that determines the spread
of inhibition. A lookup table of excitatory synaptic connectivity is stored in a separate randomaccess memory (RAM) chip. Spikes occurring on-chip are converted to a neuron address, mapped
to synapses (if any) via the lookup table, and routed to the targeted on-chip synapse. A universal
serial bus (USB) interface chip communicates spikes to and from a computer, for external input and data analysis, respectively. Simulations on the chip occur in real-time, making it an attractive option for implementing the model.

Figure 3: Traveling-wave activity in the oscillation ring. A Population activity (5ms bins) of a pool of eighteen (adjacent) oscillation neurons. B Increasing the strength of feedforward excitation led to increasing frequencies of periodic firing in the δ and θ range (1-10 Hz). Strength of excitation is the amplitude change in post-synaptic conductance due to a single pre-synaptic spike (measured relative to minimum amplitude used).
We configured the following parameters:

• Magnitude of a potassium M-current: increasing this current's magnitude increased the post-spike repolarization time of the membrane potential, thereby constraining spiking to a single time bin per cycle.
• The strength of excitatory and inhibitory synapses: a correct balance had to be established between excitation and inhibition to make only a small subset of neurons in the projection rings fire at a time; too much excitation led to widespread firing and too much inhibition led to neurons that were entirely silent or fired sporadically.
• The space constant of inhibitory spread: increasing the spread was effective in preventing runaway excitation, which could occur due to the recurrent excitatory connections.
We were able to create a stable traveling wave of background activity within the oscillation ring.
We transiently stimulated a small subset of the neurons, which initiated a wave of activity that
propagated in a stable manner around the ring after the transient external stimulation had ceased
(Figure 3A). The network frequency determined from a Fourier transform of the network activity
smoothed with a non-causal Gaussian kernel (FDHM = 80ms) was 7.4Hz. The frequency varied
with the strength of the neurons' excitatory connections (Figure 3B), measured as the amplitude of the step increase in membrane conductivity due to the arrival of a pre-synaptic spike. Over much of the range of the synaptic strengths tested, we observed stable oscillations in the δ and θ bands (1-10 Hz); the frequency appeared to increase logarithmically with synaptic strength.
4 Phase-based Encoding and Decoding
In order to assess the best-case performance of the model, the background activity in the input and
output projection rings was derived from the input oscillation ring. Their spikes were delivered to
the appropriately circularly-shifted output oscillation neurons. The asymmetric feedforward connections were disabled in the output oscillation ring. For instance, in order to achieve a circular shift
by k pools (i.e. mapping input projection pool 1 to output projection pool k + 1, input pool 2 to
output pool k + 2, and so on), activity from the input oscillation neurons closest to input pool 1 was
fed into the output oscillation neurons closest to output pool k. By providing the appropriate phase
difference between input and output oscillation, we were able to assess the performance of the model
under ideal conditions. In the Discussion section, we discuss a biologically plausible mechanism to
control the relative phases.
Figure 4: Phase-based encoding. Rasters indicate activity of projection pools in 1 ms bins, with the mean phase of firing marked for each pool (relative to an arbitrary zero time). The abscissa shows firing time normalized by the period of oscillation (which may be converted to firing phase by multiplication by 2π). Under constant input to the input projection ring, the input pools fire approximately in sequence. Two cycles of pool activity normalized by maximum firing rate for each pool are shown in the left inset (for clarity, pools 1-6 are shown in the top panel and pools 7-12 are shown separately in the bottom panel); the phase of background inhibition of pool 4 is shown (below) for reference. A phase-aligned average¹ of activity (right inset) showed that the firing times were relatively tight and uniform across pools: a standard deviation of 0.0945 periods, or equivalently, a spread of 1.135 pools at any instant of time.
We verified that the input projection pools fired in a phase-shifted fashion relative to one another,
a property critical for accurate encoding (see Figure 2). We stimulated all pools in the input projection ring simultaneously while the input oscillation ring provided a periodic wave of background
inhibition. The mean phase of firing for each pool (relative to arbitrary zero time) increased nearly
linearly with pool number, thereby providing evidence for accurate, phase-based encoding (Figure
4). The firing times of all pools are shown for two cycles of background oscillatory activity (Figure 4
left inset). A phase-aligned average¹ showed that the timing was relatively tight (standard deviation
1.135 pools) and uniform across pools of neurons (Figure 4 right inset).
We then characterized the system's ability to correctly decode this encoding under a given circular
shift. The shift was set to seven pools, mapping input pool 1 to output pool 8, and so on. Each input
pool was stimulated in turn. We expected to see only the appropriately shifted output pool become
highly active. In fact, not only was this pool active, but other pools around it were also active,
though to a lesser extent (Figure 5A). Thus, the phase-encoded input was decoded successfully, and
circularly shifted, except that the output units were broadly tuned.
To quantify the overall precision of encoding and decoding, we constructed an input-locked average of the tuning curves (Figure 5B): the curves were circularly shifted to the left by an amount
corresponding to the stimulated input pool number, and the raw pool firing rates were averaged. If
the phase-based encoding and decoding were perfect, the peak should occur at a shift of 7 pools.
¹The phase-aligned average was constructed by shifting the pool-activity curves by (# of the pool - 1) x (1/12 of the period) to align activity across pools, and then averaging.
Figure 5: Decoding phase-encoded input. A In order to assess decoding performance under a given circular shift (here 7 pools), each input pool was stimulated in turn and activity in each output pool was recorded and averaged over 500 ms. The pool's response, normalized by its maximum firing rate, is plotted for each stimulated input pool (arrows pointing to curves, color code as in Figure 4). Each input pool stimulation trial consistently resulted in peak activity in the appropriate output pool; however, adjacent pools were also active, but to a lesser extent, resulting in a broad tuning curve. B The best-fit Gaussian (dot-dashed grey curve, σ = 2.30 pools) to the input-locked average of the raw pool firing rates (see text for details) revealed a maximum between a shift of 7 and 8 pools (inverted grey triangle; expected peak at a shift of 7 pools).
Indeed, the highest (average) firing rate corresponded to a shift of 7 pools. However, the activity
corresponding to a shift of 8 pools was nearly equal to that of 7 pools, and the best fitting Gaussian curve to the activity histogram (grey dot-dashed line) peaked at a point between pools 7 and 8
(inverted grey triangle). The standard deviation (σ) was 2.30 pools, versus the expected ideal σ of 1.60, which corresponds to the encoding distribution (σ = 1.135 pools) convolved with itself.
5 Discussion
We have demonstrated a biologically plausible mechanism for the dynamic routing of information
in time that obviates the need for precise gating of connections. This mechanism requires that a
wave of activity propagate around pools of neurons arranged in a ring. While previous work has
described traveling waves in a ring of neurons [13], and a double ring architecture (for determining
head-direction) [14], our work combines these two features (twin rings with phase-shifted traveling
waves) to achieve dynamic routing. These features of the model are found in the cortex: Bonhoeffer
and Grinvald [15] describe iso-orientation columns in the cat visual cortex that are arranged in
ring-like pinwheel patterns, with orientation tuning changing gradually around the pinwheel center.
Moreover, Rubino et al. [16] have shown that coherent oscillations can propagate as waves across
the cortical surface in the motor cortex of awake, behaving monkeys performing a delayed reaching
task.
Our solution for CSI is also applicable to music perception. In the Western twelve-tone, equal-temperament tuning system (12-tone scale), each octave is divided into twelve logarithmically spaced notes. Human observers are known to construct mental representations for raw notes that
are invariant of the (perceived) key of the music: a note of C heard in the key of C-Major is perceptually equivalent to the note C# heard in the key of C#-Major [8,17]. In previous dynamic routing
models of key invariance, the tonic (the first note of the key; e.g., C is the tonic of C-Major)
supplies the equivalent where information used by routing units that gate precise connections to
map the raw note into a key-invariant output representation [17].
To achieve key invariance in our model, the bottom tier encodes raw note information while the top
tier decodes key-invariant notes (Figure 6). The middle tier receives the tonic information and aligns
the phase of the first output pool (whose invariant representation corresponds to the tonic) with the
appropriate input pool (whose raw note representation corresponds to the tonic of the perceived key).
Figure 6: Phase-based dynamic routing to achieve key-invariance. The input (bottom) tier encodes
raw note information, and the output (top) tier decodes key-invariant information. The routing
(middle) tier sets the phase of the background wave activity in the input and output oscillation rings
(dashed arrows) such that the first output pool is in phase with the input pool representing the note
corresponding to the tonic. On the left, where G is the tonic, input pool G, output pool 1, and the
routing tier are in phase with one another (black clocks), while input pool C and output pool 6 are in
phase with one another (grey clocks). Thus, the raw note input, G, activates the invariant output 1,
which corresponds to the perceived tonic invariant representation (heavy solid arrows). On the right,
the same raw input note, G, is active, but the key is different and A is now the active tonic; thus the
raw input, G, is now mapped to output pool 11.
The tonic information is supplied to a specific pool in the routing ring according to the perceived
key. This pool projects directly down to the input pool corresponding to the tonic. This ensures
that the current tonic's input pool is excitable in the same time bin as the first output pool. Each
of the remaining raw input notes of the octave is mapped by time binning to the corresponding
key-invariant representation in the output tier, as the phases of input pools are all shifted by the
same amount. Supporting evidence for phase-based encoding of note information comes from MEG
recordings in humans: the phase of the MEG signal (predominantly over right hemispheric sensor
locations) tracks the note of the heard note sequence with surprising accuracy [18].
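Functionally, this phase-shift scheme reduces to note subtraction modulo the octave. A minimal sketch (our own naming) that reproduces the two cases illustrated in Figure 6:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def invariant_pool(raw_note, tonic):
    """Key-invariant output pool for a raw note (pool 1 = the tonic)."""
    return (NOTES.index(raw_note) - NOTES.index(tonic)) % 12 + 1

assert invariant_pool("G", tonic="G") == 1    # left panel of Figure 6
assert invariant_pool("G", tonic="A") == 11   # right panel of Figure 6
```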
The input and output tiers' periods must be kept in lock-step, which can be accomplished through
more plausible means than employed in the current implementation of this model. Here, we maintained a fixed phase shift between the input and output oscillation rings by feeding activity from the
input oscillation ring to the appropriately shifted pool in the output oscillation ring. This approach
allowed us to avoid difficulties achieving coherent oscillations at identical frequencies in the input
and output oscillation rings. Alternatively, entrainment could be achieved even when the frequencies
are not identical (a more biologically plausible scenario) if the routing ring resets the phase of the
input and output rings on a cycle-by-cycle basis. Lakatos et al. [19] have shown that somatosensory inputs can reset the phase of ongoing neuronal oscillations in the primary auditory cortex (A1),
which helps in the generation of a unified auditory-tactile percept (the so-called "Hearing-Hands Effect").
A simple extension to our model can reduce the number of connections below the requirements of
traditional dynamic routing models. Instead of having all-to-all connections between the input and
output layers, a relay layer of very few (M ≪ N) neurons could be used to transmit the spikes from the input neurons to the output neurons (analogous to the single wire connecting encoder and decoder in Figure 1B). A small number of (or ideally even one) relay neurons suffices because encoding and decoding occur in time. Hence, the connections between each input pool and the relay neurons require O(MN) ≈ O(N) connections (as long as M does not scale with N), and those between the relay neurons and each output pool require O(MN) ≈ O(N) connections as well. Thus, by removing all-to-all connectivity between the input and output units (a standard feature in traditional dynamic routing models), the number of required connections is reduced from O(N^2)
to O(N). Further, by replacing the strict pool boundaries with nearest neighbor connectivity in the
projection rings, the proposed model can accommodate a continuum of rotation angles.
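For concreteness, a toy count of the connections involved, with hypothetical sizes N = 12 and M = 1:

```python
N, M = 12, 1                       # pools per ring; relay neurons (toy sizes)
routing_gated = N * N              # N routing units, each gating N connections
relay_based = N * M + M * N        # input-to-relay plus relay-to-output
print(routing_gated, relay_based)  # 144 versus 24: O(N^2) versus O(N)
```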
In summary, we propose that dynamic routing through neuronal coherence could be a general mechanism used by multiple sensory and motor modalities in the neocortex: it is particularly suitable for placing raw information in an appropriate context (defined by the routing tier).
Acknowledgments
DS was supported by a Stanford Graduate Fellowship and BP was supported under a National Science Foundation Graduate Research Fellowship.
References
[1] Olshausen B.A., Anderson C.H. & Van Essen D.C. (1993). A neurobiological model of visual attention and
invariant pattern recognition based on dynamic routing of information. Journal of Neuroscience 13(11):4700-4719.
[2] Wiskott L. (2004). How does our visual system achieve shift and size invariance? In J.L. van Hemmen &
T.J. Sejnowski (Eds.), 23 Problems in Systems Neuroscience, Oxford University Press.
[3] Fukushima K., Miyake S. & Ito T. (1983). A neural network model for a mechanism of visual pattern
recognition. IEEE Transactions on Systems, Man and Cybernetics 13:826-834.
[4] Mel B.W., Ruderman D.L. & Archie K.A. (1998). Translation invariant orientation tuning in visual "complex" cells could derive from intradendritic computations. Journal of Neuroscience 18(11):4325-4334.
[5] McKone, E. & Grenfell, T. (1999). Orientation invariance in naming rotated objects: Individual differences
and repetition priming. Perception and Psychophysics, 61:1590-1603.
[6] Harris IM & Dux PE. (2005). Orientation-invariant object recognition: evidence from repetition blindness.
Cognition, 95(1):73-93.
[7] Naval Electrical Engineering Training Series (NEETS). Module 17, Radio-Frequency Communication Principles, Chapter 3, pp.32. Published online at http://www.tpub.com/content/neets/14189 (Integrated Publishing).
[8] Krumhansl C.L. (1990). Cognitive foundations of musical pitch. Oxford University Press, 1990.
[9] Fries P. (2005). A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in Cognitive Sciences 9(10):474-480.
[10] Buzsaki G. & Draguhn A. (2004). Neuronal Oscillations in Cortical Networks. Science 304(5679):19261929.
[11] Sejnowski T.J. & Paulsen O. (2006). Network oscillations: Emerging computational principles. Journal
of Neuroscience 26(6):1673-1676.
[12] Arthur J.A. & Boahen K. (2005). Learning in Silicon: Timing is Everything. Advances in Neural Information Processing Systems 17, B. Schölkopf and Y. Weiss, Eds., MIT Press, 2006.
[13] Hahnloser R.H.R., Sarpeshkar R., Mahowald M.A., Douglas R.J., & Seung H.S. (2000). Digital selection
and analogue amplification coexist in a cortex-inspired silicon circuit. Nature 405:947-951.
[14] Xie X., Hahnloser R.H.R., & Seung H.S (2002). Double-ring network modeling of the head-direction
system. Phys. Rev. E66 041902:1-9.
[15] Bonhoeffer K. & Grinvald A. (1991). Iso-orientation domains in cat visual cortex are arranged in
pinwheel-like patterns. Nature 353:426-437.
[16] Rubino D., Robbins K.A. & Hastopoulos N.G. (2006). Propagating waves mediate information transfer in
the motor cortex. Nature Neuroscience 9:1549-1557.
[17] Bharucha J.J. (1999). Neural nets, temporal composites and tonality. In D. Deutsch (Ed.), The Psychology
of Music (2nd Ed.), Academic Press, New York.
[18] Patel A.D. & Balaban E. (2000). Temporal patterns of human cortical activity reflect tone sequence
structure. Nature 404:80-84.
[19] Lakatos P., Chen C., O'Connell M., Mills A. & Schroeder C. (2007). Neuronal oscillations and multisensory interaction in primary auditory cortex. Neuron 53(2):279-292.
2,507 | 3,274 | New Outer Bounds on the Marginal Polytope
David Sontag Tommi Jaakkola
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
dsontag,tommi@csail.mit.edu
Abstract
We give a new class of outer bounds on the marginal polytope, and propose a
cutting-plane algorithm for efficiently optimizing over these constraints. When
combined with a concave upper bound on the entropy, this gives a new variational
inference algorithm for probabilistic inference in discrete Markov Random Fields
(MRFs). Valid constraints on the marginal polytope are derived through a series
of projections onto the cut polytope. As a result, we obtain tighter upper bounds
on the log-partition function. We also show empirically that the approximations of
the marginals are significantly more accurate when using the tighter outer bounds.
Finally, we demonstrate the advantage of the new constraints for finding the MAP
assignment in protein structure prediction.
1 Introduction
Graphical models such as Markov Random Fields (MRFs) have been successfully applied to a wide
variety of fields, from computer vision to computational biology. From the point of view of inference, we are generally interested in two questions: finding the marginal probabilities of specific
subsets of the variables, and finding the Maximum a Posteriori (MAP) assignment. Both of these
require approximate methods.
We focus on a particular class of variational approximation methods that cast the inference problem
as a non-linear optimization over the marginal polytope, the set of valid marginal probabilities. The
selection of appropriate marginals from the marginal polytope is guided by the (non-linear) entropy
function. Both the marginal polytope and the entropy are difficult to characterize in general, reflecting the hardness of exact inference calculations. Most message-passing algorithms for evaluating
marginals, including belief propagation and tree-reweighted sum-product (TRW), operate instead
within the local consistency polytope, characterized by pairwise consistent marginals. For general
graphs, this is an outer bound of the marginal polytope. Various approximations have also been suggested for the entropy function. For example, in the TRW algorithm [10], the entropy is decomposed
into a weighted combination of entropies of tree-structured distributions.
Our goal here is to provide tighter outer bounds on the marginal polytope. We show how this can
be achieved efficiently using a cutting-plane algorithm, iterating between solving a relaxed problem
and adding additional constraints. Cutting-plane algorithms are a well-known technique for solving
integer linear programs. The key to such approaches is to have an efficient separation algorithm
which, given an infeasible solution, can quickly find a violated constraint, generally from a very
large class of valid constraints on the set of integral solutions.
The motivation for our approach comes from the cutting-plane literature for the maximum cut problem. Barahona et al. [3] showed that the MAP problem in pairwise binary MRFs is equivalent to a
linear optimization over the cut polytope, which is the convex hull of all valid graph cuts. Tighter
relaxations were obtained by using a separation algorithm together with the cutting-plane methodology. We extend this work by deriving a new class of outer bounds on the marginal polytope for
non-binary and non-pairwise MRFs. The key realization is that valid constraints can be constructed
by a series of projections onto the cut polytope¹. More broadly, we seek to highlight emerging
connections between polyhedral combinatorics and probabilistic inference.
2 Background
Markov Random Fields. Let x ∈ χ^n denote a random vector on n variables, where, for simplicity, each variable x_i takes on the values in χ_i = {0, 1, . . . , k − 1}. The MRF is specified by a set of d real-valued potentials or sufficient statistics φ(x) = {φ_k(x)} and a parameter vector θ ∈ R^d:

p(x; θ) = exp{⟨θ, φ(x)⟩ − A(θ)},    A(θ) = log Σ_{x∈χ^n} exp{⟨θ, φ(x)⟩},

where ⟨θ, φ(x)⟩ denotes the dot product of the parameters and the sufficient statistics. In pairwise MRFs, potentials are restricted to be at most over the edges in the graph. We assume that the potentials are indicator functions, i.e., φ_{i;s}(x) = δ(x_i = s), and make use of the following notation: μ_{i;s} = E_θ[φ_{i;s}(x)] = p(x_i = s; θ) and μ_{ij;st} = E_θ[φ_{ij;st}(x)] = p(x_i = s, x_j = t; θ).
Variational inference. The inference task is to evaluate the mean vector μ = E_θ[φ(x)]. The log-partition function A(θ), a convex function of θ, plays a critical role in these calculations. In particular, we can write the log-partition function in terms of its Fenchel-Legendre conjugate [11]:

A(θ) = sup_{μ∈M} {⟨θ, μ⟩ − B(μ)},    (1)

where B(μ) = −H(μ) is the negative entropy of the distribution parameterized by μ and is also convex. M is the set of realizable mean vectors μ, known as the marginal polytope. More precisely, M := {μ ∈ R^d | ∃p(x) s.t. μ = E_p[φ(x)]}. The value μ* ∈ M that maximizes (1) is precisely the desired mean vector corresponding to p(x; θ).
Both M and the entropy H(μ) are difficult to characterize in general and have to be approximated.
We call the resulting approximate mean vectors pseudomarginals. Mean field algorithms optimize
over an inner bound on the marginal polytope (which is not convex) by restricting the marginal vectors to those coming from simpler, e.g., fully factored, distributions. The entropy can be evaluated
exactly in this case (the distribution is simple). Alternatively, we can relax the optimization to be
over an outer bound on the marginal polytope and also bound the entropy function.
Most message passing algorithms for evaluating marginal probabilities obtain locally consistent
beliefs so that the pseudomarginals over the edges agree with the singleton pseudomarginals at the
nodes. The solution is therefore sought within the local marginal polytope
LOCAL(G) = {μ ≥ 0 | Σ_{s∈χ_i} μ_{i;s} = 1, Σ_{t∈χ_j} μ_{ij;st} = μ_{i;s}}    (2)

Clearly, M ⊆ LOCAL(G) since true marginals are also locally consistent. For trees, M =
LOCAL(G). Both LOCAL(G) and M have the same integral vertices for general graphs [11, 6].
Belief propagation can be seen as optimizing pseudomarginals over LOCAL(G) with a (non-convex)
Bethe approximation to the entropy [15]. The tree-reweighted sum-product algorithm [10], on the
other hand, uses a concave upper bound on the entropy, expressed as a convex combination of
entropies corresponding to the spanning trees of the original graph. The log-determinant relaxation
[12] is instead based on a semi-definite outer bound on the marginal polytope combined with a
Gaussian approximation to the entropy function. Since the moment matrix M1(μ) can be written as E_μ[(1 x)^T (1 x)] for μ ∈ M, the outer bound is obtained simply by requiring only that the pseudomarginals lie in SDEF1(K_n) = {μ ∈ R_+ | M1(μ) ⪰ 0}.
Maximum a posteriori. The marginal polytope also plays a critical role in finding the MAP assignment. The problem is to find an assignment x ∈ χ^n which maximizes p(x; θ), or equivalently:

max_{x∈χ^n} log p(x; θ) = max_{x∈χ^n} ⟨θ, φ(x)⟩ − A(θ) = sup_{μ∈M} ⟨θ, μ⟩ − A(θ)    (3)
where the log-partition function A(θ) remains a constant and can be ignored. The last equality holds because the optimal value of the linear program is obtained at a vertex (integral solution). That is, when the MAP assignment x* is unique, the maximizing μ* is φ(x*).
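To make the linear program concrete: the sketch below (our own illustration, not the authors' Matlab/CPLEX code) solves the relaxation of (3) obtained by replacing M with LOCAL(G) from (2), for a binary pairwise model, using scipy's LP solver. The variable layout and helper names are ours.

# Sketch: the LP relaxation of (3) over LOCAL(G) for a *binary* pairwise MRF.
# Illustrative only; the index layout and helper names are our own.
import numpy as np
from scipy.optimize import linprog

def map_lp_local(n, edges, theta_node, theta_edge):
    """theta_node[i][s], theta_edge[(i,j)][s][t]; returns (mu, optimal value)."""
    E = list(edges)
    nv = 2 * n + 4 * len(E)                 # mu_{i;s} first, then mu_{ij;st}
    node_ix = lambda i, s: 2 * i + s
    edge_ix = lambda e, s, t: 2 * n + 4 * e + 2 * s + t
    c = np.zeros(nv)                        # linprog minimizes, so negate theta
    for i in range(n):
        for s in (0, 1):
            c[node_ix(i, s)] = -theta_node[i][s]
    for e, (i, j) in enumerate(E):
        for s in (0, 1):
            for t in (0, 1):
                c[edge_ix(e, s, t)] = -theta_edge[(i, j)][s][t]
    A_eq, b_eq = [], []
    for i in range(n):                      # sum_s mu_{i;s} = 1
        row = np.zeros(nv); row[node_ix(i, 0)] = row[node_ix(i, 1)] = 1
        A_eq.append(row); b_eq.append(1.0)
    for e, (i, j) in enumerate(E):          # marginalization constraints of (2)
        for s in (0, 1):                    # sum_t mu_{ij;st} = mu_{i;s}
            row = np.zeros(nv)
            row[edge_ix(e, s, 0)] = row[edge_ix(e, s, 1)] = 1
            row[node_ix(i, s)] = -1
            A_eq.append(row); b_eq.append(0.0)
        for t in (0, 1):                    # sum_s mu_{ij;st} = mu_{j;t}
            row = np.zeros(nv)
            row[edge_ix(e, 0, t)] = row[edge_ix(e, 1, t)] = 1
            row[node_ix(j, t)] = -1
            A_eq.append(row); b_eq.append(0.0)
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * nv, method="highs")
    return res.x, -res.fun

If the returned vector is integral, it is (by the argument above) the MAP assignment; otherwise it is a fractional vertex of LOCAL(G) that the cutting-plane machinery below can try to cut off.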
¹ For reasons of clarity, our results will be given in terms of the binary marginal polytope, also called the correlation polytope, which is equivalent to the cut polytope of the suspension graph of the MRF [6].
Algorithm 1 Cutting-plane algorithm for probabilistic inference
1: OUTER ← LOCAL(G)
2: repeat
3:   μ* ← argmax_{μ∈OUTER} {⟨θ, μ⟩ − B*(μ)}
4:   Choose projection graph G_Φ, e.g. single, k, or full
5:   C ← Find_Violated_Inequalities(G_Φ, π_Φ(μ*))
6:   OUTER ← OUTER ∩ C
7: until C = R^d (did not find any violated inequalities)
Cycle inequalities. The marginal polytope can be defined by the intersection of a large number of
linear inequalities. We focus on inequalities beyond those specifying LOCAL(G), in particular the
cycle inequalities [4, 2, 6]. Assume the variables are binary. Given an assignment x ∈ {0, 1}^n, (i, j) ∈ E is cut if x_i ≠ x_j. The cycle inequalities arise from the observation that a cycle must have an even (possibly zero) number of cut edges. Suppose we start at node i, where x_i = 0. As we traverse the cycle, the assignment changes each time we cross a cut edge. Since we must return to x_i = 0, the assignment can only change an even number of times. For a cycle C and any F ⊆ C such that |F| is odd, this constraint can be written as Σ_{(i,j)∈C\F} δ(x_i ≠ x_j) + Σ_{(i,j)∈F} δ(x_i = x_j) ≥ 1. Since this constraint is valid for all assignments x ∈ {0, 1}^n, it holds also in expectation. Thus

Σ_{(i,j)∈C\F} (μ_{ij;10} + μ_{ij;01}) + Σ_{(i,j)∈F} (μ_{ij;00} + μ_{ij;11}) ≥ 1    (4)

is valid for any μ ∈ M_{0,1}, the marginal polytope of a binary pairwise MRF. For a chordless circuit C, the cycle inequalities are facets of M_{0,1} [4]. They suffice to characterize M_{0,1} for a graph G if and only if G has no K_4-minor. Although there are exponentially many cycles and cycle
inequalities for a graph, Barahona and Mahjoub [4, 6] give a simple algorithm to separate the whole
class of cycle inequalities.
To see whether any cycle inequality is violated, construct the undirected graph G′ = (V′, E′) where V′ contains nodes i₁ and i₂ for each i ∈ V, and for each (i, j) ∈ E, the edges in E′ are: (i₁, j₁) and (i₂, j₂) with weight μ_{ij;10} + μ_{ij;01}, and (i₁, j₂) and (i₂, j₁) with weight μ_{ij;00} + μ_{ij;11}. Then, for each node i ∈ V we find the shortest path in G′ from i₁ to i₂. The shortest of all these paths will not use both copies of any node j (otherwise the path j₁ to j₂ would be shorter), and so defines a cycle in G and gives the minimum value of Σ_{(i,j)∈C\F} (μ_{ij;10} + μ_{ij;01}) + Σ_{(i,j)∈F} (μ_{ij;00} + μ_{ij;11}). If this is less than 1, we have found a violated cycle inequality; otherwise, μ satisfies all cycle inequalities. Using Dijkstra's shortest paths algorithm with a Fibonacci heap [5], the separation problem can be solved in time O(n² log n + n|E|).
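As an illustration, here is a sketch of this separation routine for binary models, with the doubled graph built exactly as described above. The data layout (a 2×2 table mu_edge[(i,j)] of pairwise pseudomarginals per edge) is our own choice, and a production version would use a Fibonacci heap to reach the stated complexity; a plain binary heap is used here for brevity.

# Sketch of the Barahona-Mahjoub separation routine described above.
# mu_edge[(i,j)] is the 2x2 table of mu_{ij;st}; layout is our own.
# Returns the vertex list of a violated cycle (i appears at both ends),
# or None if all cycle inequalities hold.
import heapq

def separate_cycle_inequalities(n, edges, mu_edge):
    adj = {v: [] for v in range(2 * n)}     # copies: i1 = 2i, i2 = 2i+1
    for (i, j) in edges:
        m = mu_edge[(i, j)]
        w_cut = m[1][0] + m[0][1]           # mu_{ij;10} + mu_{ij;01}
        w_same = m[0][0] + m[1][1]          # mu_{ij;00} + mu_{ij;11}
        for a in (0, 1):                    # same-copy edges carry w_cut,
            adj[2*i + a].append((2*j + a, w_cut))
            adj[2*j + a].append((2*i + a, w_cut))
            adj[2*i + a].append((2*j + 1 - a, w_same))   # cross-copy: w_same
            adj[2*j + a].append((2*i + 1 - a, w_same))
    best = (1.0, None)                      # a violation needs length < 1
    for i in range(n):
        dist, prev = {2*i: 0.0}, {}
        pq = [(0.0, 2*i)]
        while pq:                           # Dijkstra from i1 toward i2
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj[u]:
                if d + w < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(pq, (d + w, v))
        if dist.get(2*i + 1, float("inf")) < best[0]:
            path, u = [], 2*i + 1           # the i1 -> i2 path is a cycle in G;
            while u != 2*i:                 # edges traversed with w_same form F
                path.append(u // 2)         # (odd |F|, since the walk switches
                u = prev[u]                 # copies an odd number of times)
            path.append(i)
            best = (dist[2*i + 1], path[::-1])
    return best[1]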
3 Cutting-plane algorithm
Our main result is the proposed Algorithm 1 given above. The algorithm alternates between solving for an upper bound of the log-partition function (see Eq. 1) and tightening the outer bound on
the marginal polytope by incorporating valid constraints that are violated by the current pseudomarginals. The projection graph (line 4) is not needed for binary pairwise MRFs and will be described in the next section. We start the algorithm (line 1) with the loose outer bound on the marginal
polytope given by the local consistency constraints. Tighter initial constraints, e.g., M1(μ) ⪰ 0, are
possible as well.
The separation algorithm returns a feasible set C given by the intersection of halfspaces, and we intersect this with OUTER to obtain a smaller feasible space, i.e. a tighter relaxation. The experiments
in Section 5 use the separation algorithm for cycle inequalities. However, any class of valid constraints for the marginal polytope with an efficient separation algorithm may be used in line 5. Other
examples besides the cycle inequalities include the odd-wheel and bicycle odd-wheel inequalities
[6], and also linear inequalities that enforce positive semi-definiteness of M1(μ). The cutting-plane
algorithm is in effect optimizing the variational objective (Eq. 1) over a relaxation of the marginal
polytope defined by the intersection of all inequalities that can be returned in line 5.
Any entropy approximation B*(μ) can be used so long as we can efficiently solve the optimization problem in line 3. The log-determinant and TRW entropy approximations have two appealing features. First, as upper bounds they permit the algorithm to be used for obtaining tighter upper bounds on the log-partition function. Second, the objective functions to be maximized are convex and can be solved efficiently using conditional gradient or other methods.

Figure 1: Illustration of the projection π_Φ for one edge (i, j) ∈ E where χ_i = {0, 1, 2} and χ_j = {0, 1, 2, 3}. The projection graph G_Φ, shown on the right, has 3 partitions for i and 7 for j.
When the algorithm terminates, we can use the last μ* vector as an approximation to the single node and edge marginals. The results given in Section 5 use this method. The algorithm for MAP is the same, excluding the entropy function in line 3; the optimization is simply a linear program. Since all integral vectors in the relaxation OUTER are extreme points of the marginal polytope, any integral μ* is the MAP assignment.
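Schematically, the MAP version of Algorithm 1 is then just the following loop (a sketch; solve_lp and separate are placeholders for line 3 and line 5, e.g. the LP and shortest-path routines sketched above).

# Sketch of Algorithm 1 specialized to MAP (line 3 is a pure LP).
def cutting_plane_map(solve_lp, separate):
    cuts = []
    while True:
        mu = solve_lp(cuts)                 # optimize over current OUTER
        new_cuts = separate(mu)             # inequalities that mu violates
        if not new_cuts:                    # "C = R^d" case in Algorithm 1
            return mu                       # an integral mu is the MAP
        cuts.extend(new_cuts)               # OUTER <- OUTER intersect C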
4 Generalization to non-binary MRFs
In this section we give a new class of valid inequalities for the marginal polytope of non-binary and
non-pairwise MRFs, and show how to efficiently separate this exponentially large set of inequalities.
The key theoretical idea is to project the marginal polytope onto different binary marginal polytopes.
Aggregation and projection are well-known techniques in polyhedral combinatorics for obtaining
valid inequalities [6]. Given a linear projection π(x) = Ax, any valid inequality c′π(x) ≤ b for π(x) also gives the valid inequality c′Ax ≤ b for x. We obtain new inequalities by aggregating the
values of each variable.
For each variable i, let π_i^q be a partition of its values into two non-empty sets, i.e., the map π_i^q : χ_i → {0, 1} is surjective. Let Φ_i = {π_i^1, π_i^2, . . .} be a collection of partitions of variable i. Define the projection graph G_Φ = (V_Φ, E_Φ) so that there is a node for each π_i^q ∈ Φ_i, and nodes π_i^q and π_j^r are connected if (i, j) ∈ E. We call the graph consisting of all possible variable partitions the full projection graph. In Figure 1 we show part of the full projection graph corresponding to one edge (i, j), where x_i has three values and x_j has four values. Intuitively, a partition for a variable splits its values into two clusters, resulting in a binary variable. For example, the (new) variable corresponding to the partition {0, 1}{2} of x_i is 1 if x_i = 2, and 0 otherwise. The following gives a projection of marginal vectors of non-binary MRFs onto the marginal polytope of the projection graph G_Φ, which has binary variables for each partition.
Definition 1. The linear map π_Φ takes μ ∈ M and for each node v = π_i^q ∈ V_Φ assigns μ′_{v;1} = Σ_{s∈χ_i s.t. π_i^q(s)=1} μ_{i;s}, and for each edge e = (π_i^q, π_j^r) ∈ E_Φ assigns μ′_{e;11} = Σ_{s_i∈χ_i, s_j∈χ_j s.t. π_i^q(s_i)=π_j^r(s_j)=1} μ_{ij;s_i s_j}.
To construct valid inequalities for each projection we need to characterize the image space. Let
M_{0,1}(G_Φ) denote the binary marginal polytope of the projection graph.
Theorem 1. The image of the projection π_Φ is M_{0,1}(G_Φ), i.e. π_Φ : M → M_{0,1}(G_Φ).
Proof. Since π_Φ is a linear map, it suffices to show that, for every extreme point μ ∈ M, π_Φ(μ) ∈ M_{0,1}(G_Φ). The extreme points of M correspond one-to-one with assignments x ∈ χ^n. Given an extreme point μ ∈ M and variable v = π_i^q ∈ V_Φ, define x′(μ)_v = Σ_{s∈χ_i s.t. π_i^q(s)=1} μ_{i;s}. Since μ is an extreme point, μ_{i;s} = 1 for exactly one value s, which implies that x′(μ) ∈ {0, 1}^{|V_Φ|}. Then, π_Φ(μ) = E[φ(x′(μ))], showing that π_Φ(μ) ∈ M_{0,1}(G_Φ).
This result allows valid inequalities for M_{0,1}(G_Φ) to carry over to M. In general, the projection π_Φ will not be surjective. Suppose every variable has k values. The single projection graph, where |Φ_i| = 1 for all i, has one node per variable and is surjective. The full projection graph has O(2^k) nodes per variable. A cutting-plane algorithm may begin by projecting onto a small graph, then expanding to larger graphs only after satisfying all inequalities given by the smaller one. The k-projection graph G_k = (V_k, E_k) has k partitions per variable corresponding to each value versus all the other values.
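The projection of Definition 1 is mechanical to implement. The sketch below is our own (layout ours: mu_node[i] is a length-|χ_i| vector, mu_edge[(i,j)] a table, and partitions[i] a list of 0/1 indicator tuples over χ_i). For the k-projection graph, partitions[i] would hold the k indicators of each value versus the rest.

# Sketch of the projection pi_Phi of Definition 1; data layout is our own.
def project(mu_node, mu_edge, partitions):
    nu_node = {}
    for i, parts in partitions.items():
        for q, pi in enumerate(parts):      # mu'_{v;1} = sum over s with pi(s)=1
            nu_node[(i, q)] = sum(mu_node[i][s]
                                  for s in range(len(pi)) if pi[s])
    nu_edge = {}
    for (i, j), mij in mu_edge.items():     # mu'_{e;11}: both ends map to 1
        for q, pi in enumerate(partitions[i]):
            for r, pj in enumerate(partitions[j]):
                nu_edge[((i, q), (j, r))] = sum(
                    mij[s][t]
                    for s in range(len(pi)) if pi[s]
                    for t in range(len(pj)) if pj[t])
    return nu_node, nu_edge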
These projections yield a new class of cycle inequalities for the marginal polytope. Consider a single projection graph G_Φ, a cycle C in G, and any F ⊆ C such that |F| is odd. Let π_i be the partition for node i. We obtain the following valid inequality for μ ∈ M by applying the projection π_Φ and the cycle inequality:

Σ_{(i,j)∈C\F} μ̃_{ij}(x′_i ≠ x′_j) + Σ_{(i,j)∈F} μ̃_{ij}(x′_i = x′_j) ≥ 1,    (5)

where

μ̃_{ij}(x′_i ≠ x′_j) = Σ_{s_i∈χ_i, s_j∈χ_j s.t. π_i(s_i)≠π_j(s_j)} μ_{ij;s_i s_j}    (6)

μ̃_{ij}(x′_i = x′_j) = Σ_{s_i∈χ_i, s_j∈χ_j s.t. π_i(s_i)=π_j(s_j)} μ_{ij;s_i s_j}.    (7)

It is revealing to contrast (5) with Σ_{(i,j)∈C\F} δ(x_i ≠ x_j) + Σ_{(i,j)∈F} δ(x_i = x_j) ≥ 1. For x ∈ χ^n, the latter holds only for |F| = 1. We can only obtain the more general inequality by fixing a partition of each node's values.
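For concreteness, the left-hand side of (5) can be evaluated directly from (6) and (7). In the sketch below (layout ours), part[i] is the chosen 0/1 partition indicator for node i, the cycle is a vertex list, and F is a set of edges given in the same orientation as consecutive cycle vertices; a value below 1 signals a violated inequality.

# Sketch: evaluate the left-hand side of inequality (5).
# mu_edge[(i,j)][s][t] is assumed keyed in cycle orientation (our assumption).
def projected_cycle_lhs(cycle, F, mu_edge, part):
    lhs = 0.0
    m = len(cycle)
    for k in range(m):
        i, j = cycle[k], cycle[(k + 1) % m]
        tab = mu_edge[(i, j)]
        for s in range(len(part[i])):
            for t in range(len(part[j])):
                cut = part[i][s] != part[j][t]   # pi_i(s) != pi_j(t)
                if ((i, j) in F) != cut:         # per (6): "cut" terms off F;
                    lhs += tab[s][t]             # per (7): "same" terms on F
    return lhs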
Theorem 2. For every single projection graph G_Φ and every cycle inequality arising from a chordless circuit C on G_Φ, ∃μ ∈ LOCAL(G)\M such that μ violates that inequality.

Proof. For each variable i ∈ V, choose s_i, t_i s.t. π_i(s_i) = 1 and π_i(t_i) = 0. Assign μ_{i;q} = 0 for q ∈ χ_i\{s_i, t_i}. Similarly, for every (i, j) ∈ E, assign μ_{ij;qr} = 0 for q ∈ χ_i\{s_i, t_i} and r ∈ χ_j\{s_j, t_j}. The polytope resulting from the projection of M onto the remaining values (e.g. μ_{i;s_i}) is isomorphic to M_{0,1} for the graph G_Φ. Barahona and Mahjoub [4] showed that the cycle inequality on the chordless circuit C is facet-defining for M_{0,1}. Since C is over ≥ 3 variables from G, this cannot be a facet of LOCAL(G). Let LOCAL_{0,1} be the projection of LOCAL(G) onto the remaining values. Thus, ∃μ′ ∈ LOCAL_{0,1}\M_{0,1}, and we can assign μ accordingly.
Note that the theorem implies that the projected cycle inequalities are strictly tighter than
LOCAL(G), but it does not characterize how much is gained.
If all n variables have k values, then there are O((2^k)^n) different single projection graphs. However, since for every cycle inequality in the single projection graphs there is an equivalent cycle inequality in the full projection graph, it suffices to consider just the full projection graph. Thus, even though the projection is not surjective, the full projection graph, which has O(n2^k) nodes, allows us to efficiently obtain a tighter relaxation than any combination of projection graphs would give. In particular, the separation problem for all cycle inequalities (5) for all single projection graphs, when we allow some additional valid inequalities for M (arising from the cycle using more than one partition for some variables), can now be solved in time O(poly(n, 2^k)).
Related work. In earlier work, Althaus et al. [1] analyze the GMEC polyhedron, which is equivalent
to the marginal polytope. They use a similar value-aggregation technique to derive valid constraints
from the triangle inequalities. Koster et al. [8] investigate the Partial Constraint Satisfaction Problem polytope, which is also equivalent to the marginal polytope. They used value-aggregation to
show that a class of cycle inequalities (corresponding to Eq. 5 for |F | = 1) are valid for this polytope, and give an algorithm to separate the inequalities for a single cycle. Interestingly, both papers
showed that these constraints are facet-defining.
Non-pairwise Markov random fields. These results could be applied to non-pairwise MRFs by
first projecting the marginal vector onto the marginal polytope of a pairwise MRF. More generally,
suppose we include additional variables corresponding to the joint probability of a cluster of variables. We need to add constraints enforcing that all variables in common between two clusters have
the same marginals. For pairwise clusters these are simply the usual local consistency constraints.
We can now apply the projections of the previous section, considering various partitions of each
cluster variable, to obtain a tighter relaxation of the marginal polytope.
[Two plots: average error of the single node marginals (y-axis, 0 to 0.5) versus coupling strength θ_ij ∼ U[−x, x] (x-axis, 0.5 to 8), comparing TRW and Logdet, each alone and in + PSD, + Cycle, and + Marg (exact marginal polytope) variants.]
Figure 2: Accuracy of single node marginals on 10 node complete graph (100 trials).
5 Experiments
Computing marginals. We experimented with Algorithm 1 using both the log-determinant [12] and the TRW [10] entropy approximations. These trials are on Ising models, which are pairwise MRFs with x_i ∈ {−1, 1} and potentials φ_i(x) = x_i for i ∈ V and φ_ij(x) = x_i x_j for (i, j) ∈ E. Although TRW can efficiently optimize over the spanning tree polytope, for these experiments we simply use a weighted distribution over spanning trees, where each tree's weight is the sum of the absolute values of its edge weights θ_ij. The edge appearance probabilities for this distribution can be efficiently computed using the Matrix Tree Theorem [13]. We optimize the TRW objective with conditional gradient, using linear programming after each gradient step to project onto OUTER. We used the glpkmex and YALMIP optimization packages within Matlab, and wrote the separation algorithm for the cycle inequalities in Java.
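As an aside on the Matrix Tree Theorem step: the paper's tree distribution weights each tree by the sum of its |θ_ij|, which takes extra bookkeeping. To illustrate the underlying machinery, the sketch below handles the simpler product-of-edge-weights distribution, for which Pr(e ∈ T) equals w_e times the effective resistance across e in the weighted graph. This is our own example, not the experiments' exact weighting.

# Sketch: edge appearance probabilities rho_e = Pr(e in T) when a spanning
# tree's weight is the *product* of its edge weights w_e. By the weighted
# Matrix Tree Theorem this equals w_e times the effective resistance across
# e, computed here from the pseudoinverse of the weighted graph Laplacian.
import numpy as np

def edge_appearance_probs(n, edges, w):
    L = np.zeros((n, n))                    # weighted graph Laplacian
    for (i, j), we in zip(edges, w):
        L[i, i] += we; L[j, j] += we
        L[i, j] -= we; L[j, i] -= we
    Lp = np.linalg.pinv(L)                  # assumes a connected graph
    rho = []
    for (i, j), we in zip(edges, w):
        reff = Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]
        rho.append(we * reff)               # rho_e in (0,1]; sum equals n - 1
    return rho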
In Figure 2 we show results for 10 node complete graphs with θ_i ∼ U[−1, 1] and θ_ij ∼ U[−x, x], where the coupling strength is varied along the x-axis of the figure. For each data point we averaged the results over 100 trials. The y-axis shows the average ℓ₁ error of the single node marginals. These MRFs are highly coupled, and loopy belief propagation (not shown) with a .5 decay rate seldom converges. The TRW and log-determinant algorithms, optimizing over the local consistency polytope, give pseudomarginals only slightly better than loopy BP. Even adding the positive semi-definite constraint M1(μ) ⪰ 0, for which TRW must be optimized using conditional gradient and semidefinite programming for the projection step, does not improve the accuracy by much. However, both entropy approximations give significantly better pseudomarginals when used by our algorithm together with the cycle inequalities (see "TRW + Cycle" and "Logdet + Cycle" in the figures). For small MRFs, we can exactly represent the marginal polytope as the convex hull of its 2^n vertices. We found that the cycle inequalities give nearly as good accuracy as the exact marginal polytope (see "TRW + Marg" and "Logdet + Marg").
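The exact-polytope baseline is feasible here precisely because M has few vertices for n = 10: they are the statistics vectors φ(x). A sketch (ours) of their enumeration for the Ising parameterization above:

# Sketch: vertices of M for a small Ising model are phi(x), x in {-1,+1}^n.
# Optimizing over their convex hull gives the exact-polytope ("Marg") curves.
import itertools
import numpy as np

def marginal_polytope_vertices(n, edges):
    verts = []
    for x in itertools.product([-1, 1], repeat=n):
        phi = list(x) + [x[i] * x[j] for (i, j) in edges]   # phi_i, phi_ij
        verts.append(phi)
    return np.array(verts)                  # 2^n rows; M = conv(rows)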
Our work sheds some light on the relative value of the entropy approximation compared to the
relaxation of the marginal polytope. When the MRF is weakly coupled, both entropy approximations
do reasonably well using the local consistency polytope. This is not surprising: the limit of weak
coupling is a fully disconnected graph, for which both the entropy approximation and the marginal
polytope relaxation are exact. With the local consistency polytope, both entropy approximations
get steadily worse as the coupling increases. In contrast, using the exact marginal polytope, we see a peak at x = 2, then a steady improvement in accuracy as the coupling term grows. This occurs because the limit of strong coupling is the MAP problem, for which using the exact marginal polytope will give exact results. The interesting region is near the peak x = 2, where the entropy term is neither exact nor outweighed by the coupling. Our algorithm seems to "solve" the part of the problem caused by the local consistency polytope relaxation: TRW's accuracy goes from .33 to .15, and log-determinant's accuracy from .17 to .076. The fact that neither entropy approximation
can achieve accuracy below .07, even with the exact marginal polytope, motivates further research
on improving this part of the approximation.
[Four boxplot panels: average ℓ₁ error and average prediction error of the single node marginals versus the number of cutting-plane iterations, for a 10×10 grid (iterations 1-10) and a 20 node complete graph (iterations 3-45).]
Figure 3: Accuracy of single node marginals with TRW entropy, θ_i ∼ U[−1, 1] and θ_ij ∼ U[−4, 4].
[A plot of cutting-plane iterations (0-20) against the number of amino acids (variables, 0-1000) for the 30 proteins, together with the violated projected cycle inequality discussed below.]
Figure 4: MAP for protein side-chain prediction with Rosetta energy function.
Next, we looked at the number of iterations (in terms of the loop in Algorithm 1) the algorithm takes
before all cycle inequalities are satisfied. In each iteration we add to OUTER at most² n violated cycle inequalities, coming from the n shortest paths. In Figure 3 we show boxplots of the ℓ₁ error of the single node marginals for both 10×10 grid MRFs (40 trials) and 20 node complete MRFs (10
trials). We also show whether the pseudomarginals are on the correct side of .5, which is important if
we were doing prediction based on the results from approximate inference. The middle line gives the
median, the boxes show the upper and lower quartiles, and the whiskers show the extent of the data.
Iteration 1 corresponds to TRW with only the local consistency constraints. For the grid MRFs, all of
the cycle inequalities were satisfied within 10 iterations. We observed the same convergence results
on a 30×30 grid, although we could not assess the accuracy due to the difficulty of exact marginals
calculation. For the complete graph MRFs, the algorithm took many more iterations before all cycle
inequalities were satisfied.
Protein side-chain prediction. We next applied our algorithm to the problem of predicting protein side-chain configurations. Given the 3-dimensional structure of a protein's backbone, the task is to predict the relative angle of each amino acid's side-chain. The angles are discretized into at most
45 values. Yanover et al. [14] showed that minimization of the Rosetta energy function corresponds
to finding the MAP assignment of a non-binary pairwise MRF. They also showed that the tree-reweighted max-product algorithm [9] can be used to solve the LP relaxation given by LOCAL(G),
and that this succeeds in finding the MAP assignment for 339 of the 369 proteins in their data set.
However, the optimal solution to the LP relaxation for the remaining 30 proteins, arguably the most
difficult of the proteins, is fractional.
Using the k-projection graph and projected cycle inequalities, we succeeded in finding the MAP assignment for all proteins except for the protein "1rl6". We show in Figure 4 the number of cutting-plane iterations needed for each of the 30 proteins. In each iteration, we solve the LP relaxation, and, if the solution is not integral, run the separation algorithm to find violated inequalities. For the protein "1rl6", after 12 cutting-plane iterations, the solution was not integral, and we could not find any violated cycle inequalities using the k-projection graph. We then tried using the full projection graph, and found the MAP after just one (additional) iteration. Figure 4 shows one of the cycle inequalities (5) in the full projection graph that was found to be violated. The cut edges indicate the 3 edges in F. The violating μ had μ_{36;s} = .1667 for s ∈ {0, 1, 2, 3, 4, 5}, μ_{38;6} = .3333, μ_{38;4} = .6667, μ_{43;s} = .1667 for s ∈ {1, 2, 4, 5}, μ_{43;3} = .3333, and zero for all other values of
these variables. This example shows that the relaxation given by the full projection graph is strictly
tighter than that of the k-projection graph.
² Many fewer inequalities were added, since not all cycles in G′ are simple cycles in G.
The commercial linear programming solver CPLEX 10.0 solves each LP relaxation in under 75 seconds. Using simple heuristics, the separation algorithm runs in seconds, and we find each protein's MAP assignment in under 11.3 minutes. Kingsford et al. [7] found, and we also observed, that CPLEX's branch-and-cut algorithm for solving integer linear programs also works well for these
problems. One interesting future direction would be to combine the two approaches, using our new
outer bounds within the branch-and-cut scheme. Our results show that the new outer bounds are
powerful, allowing us to find the MAP solution for all of the MRFs, and suggesting that using them
will also lead to significantly more accurate marginals for non-binary MRFs.
6 Conclusion
The facial structure of the cut polytope, equivalently, the binary marginal polytope, has been well-studied over the last twenty years. The cycle inequalities are just one of many large classes of valid
inequalities for the cut polytope for which efficient separation algorithms are known. Our theoretical
results can be used to derive outer bounds for the marginal polytope from any of the valid inequalities
on the cut polytope. Our approach is particularly valuable because it takes advantage of the sparsity
of the graph, and only uses additional constraints when they are guaranteed to affect the solution.
An interesting open problem is to develop new message-passing algorithms which can incorporate
cycle and other inequalities, to efficiently do the optimization within the cutting-plane algorithm.
Acknowledgments
The authors thank Amir Globerson and David Karger for helpful discussions. This work was supported in part by the DARPA Transfer Learning program. D.S. was also supported by a National
Science Foundation Graduate Research Fellowship.
References
[1] E. Althaus, O. Kohlbacher, H.-P. Lenhof, and P. Müller. A combinatorial approach to protein docking with flexible side-chains. In RECOMB '00, pages 15-24, 2000.
[2] F. Barahona. On cuts and matchings in planar graphs. Mathematical Programming, 60:53-68, 1993.
[3] F. Barahona, M. Grötschel, M. Jünger, and G. Reinelt. An application of combinatorial optimization to statistical physics and circuit layout design. Operations Research, 36(3):493-513, 1988.
[4] F. Barahona and A. R. Mahjoub. On the cut polytope. Mathematical Programming, 36:157-173, 1986.
[5] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms. MIT Press, 2nd edition, 2001.
[6] M. M. Deza and M. Laurent. Geometry of Cuts and Metrics, volume 15 of Algorithms and Combinatorics. Springer, 1997.
[7] C. L. Kingsford, B. Chazelle, and M. Singh. Solving and analyzing side-chain positioning problems using linear and integer programming. Bioinformatics, 21(7):1028-1039, 2005.
[8] A. Koster, S. van Hoesel, and A. Kolen. The partial constraint satisfaction problem: Facets and lifting theorems. Operations Research Letters, 23:89-97, 1998.
[9] M. Wainwright, T. Jaakkola, and A. Willsky. MAP estimation via agreement on trees: message-passing and linear programming. IEEE Transactions on Information Theory, 51(11):3697-3717, November 2005.
[10] M. Wainwright, T. Jaakkola, and A. Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51:2313-2335, July 2005.
[11] M. Wainwright and M. I. Jordan. Graphical models, exponential families and variational inference. Technical Report 649, UC Berkeley, Dept. of Statistics, 2003.
[12] M. Wainwright and M. I. Jordan. Log-determinant relaxation for approximate inference in discrete Markov random fields. IEEE Transactions on Signal Processing, 54(6):2099-2109, June 2006.
[13] D. B. West. Introduction to Graph Theory. Prentice Hall, 2001.
[14] C. Yanover, T. Meltzer, and Y. Weiss. Linear programming relaxations and belief propagation - an empirical study. JMLR Special Issue on Machine Learning and Large Scale Optimization, 7:1887-1907, September 2006.
[15] J. Yedidia, W. Freeman, and Y. Weiss. Bethe free energy, Kikuchi approximations, and belief propagation algorithms. Technical Report 16, Mitsubishi Electric Research Lab, 2001.
Walk-Sum Analysis
Venkat Chandrasekaran, Jason K. Johnson, and Alan S. Willsky
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
venkatc@mit.edu, jasonj@mit.edu, willsky@mit.edu
Abstract
We consider the estimation problem in Gaussian graphical models with arbitrary
structure. We analyze the Embedded Trees algorithm, which solves a sequence of
problems on tractable subgraphs thereby leading to the solution of the estimation
problem on an intractable graph. Our analysis is based on the recently developed
walk-sum interpretation of Gaussian estimation. We show that non-stationary iterations of the Embedded Trees algorithm using any sequence of subgraphs converge in walk-summable models. Based on walk-sum calculations, we develop
adaptive methods that optimize the choice of subgraphs used at each iteration with
a view to achieving maximum reduction in error. These adaptive procedures provide a significant speedup in convergence over stationary iterative methods, and
also appear to converge in a larger class of models.
1 Introduction
Stochastic processes defined on graphs offer a compact representation for the Markov structure in a
large collection of random variables. We consider the class of Gaussian processes defined on graphs,
or Gaussian graphical models, which are used to model natural phenomena in many large-scale applications [1, 2]. In such models, the estimation problem can be solved by directly inverting the
information matrix. However, the resulting complexity is cubic in the number of variables, thus
being prohibitively complex in applications involving hundreds of thousands of variables. Algorithms such as Belief Propagation and the junction-tree method are effective for computing exact
estimates in graphical models that are tree-structured or have low treewidth [3], but for graphs with
high treewidth the junction-tree approach is intractable.
We describe a rich class of iterative algorithms for estimation in Gaussian graphical models with
arbitrary structure. Specifically, we discuss the Embedded Trees (ET) iteration [4] that solves a
sequence of estimation problems on trees, or more generally tractable subgraphs, leading to the solution of the original problem on the intractable graph. We analyze non-stationary iterations of the
ET algorithm that perform inference calculations on an arbitrary sequence of subgraphs. Our analysis is based on the recently developed walk-sum interpretation of inference in Gaussian graphical
models [5]. We show that in the broad class of so-called walk-summable models, the ET algorithm
converges for any arbitrary sequence of subgraphs used. The walk-summability of a model is easily
tested [5, 6], thus providing a simple sufficient condition for the convergence of such non-stationary
algorithms. Previous convergence results [6, 7] analyzed stationary or "cyclo-stationary" iterations
that use the same subgraph at each iteration or cycle through a fixed sequence of subgraphs. The
focus of this paper is on analyzing, and developing algorithms based on, arbitrary non-stationary
iterations that use any (non-cyclic) sequence of subgraphs, and the recently developed concept of
walk-sums appears to be critical to this analysis.
Given this great flexibility in choosing successive iterative steps, we develop algorithms that adaptively optimize the choice of subgraphs to achieve maximum reduction in error. These algorithms
take advantage of walk-sum calculations, which are useful in showing that our methods minimize
an upper-bound on the error at each iteration. We develop two procedures to adaptively choose subgraphs. The first method finds the best tree at each iteration by solving an appropriately formulated
maximum-weight spanning tree problem, with the weight of each edge being a function of the partial correlation coefficient of the edge and the residual errors at the nodes that compose the edge.
The second method, building on this first method, adds extra edges in a greedy manner to the tree
resulting from the first method to form a thin hypertree. Simulation results demonstrate that these
non-stationary algorithms provide a significant speedup in convergence over stationary and cyclic
iterative methods. Since the class of walk-summable models is broad (including attractive models,
diagonally dominant models, and so-called pairwise-normalizable models), our methods provide a
convergent, computationally attractive method for inference. We also provide empirical evidence to
show that our adaptive methods (with a minor modification) converge in many non-walk-summable
models when stationary iterations diverge. The estimation problem in Gaussian graphical models
involves solving a linear system with a sparse, symmetric, positive-definite matrix. Such systems
are commonly encountered in other areas of machine learning and signal processing as well [8, 9].
Therefore, our methods are broadly applicable beyond estimation in Gaussian models.
Some of the results presented here appear in more detail in a longer paper [10], which provides
complete proofs as well as a detailed description of walk-sum diagrams that give a graphical interpretation of our algorithms (we show an example in this paper). The report also considers problems
involving communication "failure" between nodes for distributed sensor network applications.
2 Background
Let G = (V, E) be a graph with vertices V, and edges E ⊆ (V 2) that link pairs of vertices together. Here, (V 2) represents the set of all unordered pairs of vertices. Consider a Gaussian distribution in information form [5] p(x) ∝ exp{−½ x^T J x + h^T x}, where J^{-1} is the covariance matrix and J^{-1}h is the mean. The matrix J, also called the information matrix, is sparse according to graph G, i.e. J_{s,t} = J_{t,s} = 0 if and only if {s, t} ∉ E. Thus, G represents the graph with respect to which p(x)
is Markov, i.e. p(x) satisfies the conditional independencies implied by the separators of G. The
Gaussian mean estimation problem reduces to solving the following linear system of equations:
Jx = h,    (1)
where x is the mean vector. Convergent iterations that compute the mean can also be used in turn to
compute variances using a variety of methods [4, 11]. Thus, we focus on the problem of estimating
the mean at each node. Throughout the rest of this paper, we assume that J is normalized to have 1's along the diagonal.¹ Such a re-scaling does not affect the convergence results in this paper, and our analysis and algorithms can be easily generalized to the un-normalized case [10].
¹ This can be achieved by performing the transformation Ĵ ← D^{-1/2} J D^{-1/2}, where D is a diagonal matrix containing the diagonal entries of J.
2.1 Walk-sums
We give a brief overview of the walk-sum framework developed in [5]. Let J = I − R. The off-diagonal entries of the matrix R have the same sparsity structure as that of J, and consequently that of the graph G. For Gaussian processes defined on graphs, element R_{s,t} corresponds to the conditional correlation coefficient between the variables at vertices s and t conditioned on knowledge of all the other variables (also known as the partial correlation coefficient [5]). A walk is a sequence of vertices {w_i}_{i=0}^ℓ such that each step {w_i, w_{i+1}} ∈ E, 0 ≤ i ≤ ℓ−1, with no restriction on crossing the same vertex or traversing the same edge multiple times. The weight of a walk is the product of the edge-wise partial correlation coefficients of the edges composing the walk: φ(w) ≜ ∏_{i=0}^{ℓ−1} R_{w_i,w_{i+1}}. We then have that (R^ℓ)_{s,t} is the sum of the weights of all length-ℓ walks from s to t in G. With this point of view, we can interpret J^{-1} as follows:
(J^{-1})_{s,t} = ((I − R)^{-1})_{s,t} = Σ_{ℓ=0}^∞ (R^ℓ)_{s,t} = Σ_{ℓ=0}^∞ φ^{(ℓ)}(s → t),    (2)

where φ^{(ℓ)}(s → t) represents the sum of the weights of all the length-ℓ walks from s to t (the set of all such walks is finite). Thus, (J^{-1})_{s,t} is the length-ordered sum over all walks in G from s to t. This, however, is a very specific way to compute the inverse that converges if the spectral radius ρ(R) < 1. Other algorithms may compute walks according to different orders (rather than length-based orders). To analyze arbitrary algorithms that submit to a walk-sum interpretation, the following concept of walk-summability was developed in [5]. A model is said to be walk-summable if for each pair of vertices s, t ∈ V, the absolute sum over all walks from s to t in G converges:
φ̄(s → t) ≜ Σ_{w∈W(s→t)} |φ(w)| < ∞.    (3)

Here, W(s → t) represents the set of all walks from s to t, and φ̄(s → t) denotes the absolute walk-sum over this set.² Based on the absolute convergence condition, walk-summability implies that walk-sums over a countable set of walks in G can be computed in any order. As a result, we have the following interpretation in walk-summable models:
(J^{-1})_{s,t} = φ(s → t),    (4)

x_t = (J^{-1}h)_t = Σ_{s∈V} h_s φ(s → t) ≜ φ(h; ∗ → t),    (5)

where the wildcard character ∗ denotes a union over all vertices in V, and φ(h; W) denotes a re-weighting of each walk in W by the corresponding h value at the starting node. Note that in (4) we relax the constraint that the sum is ordered by length, and do not explicitly specify an ordering on the walks (such as in (2)). In words, (J^{-1})_{s,t} is the walk-sum over the set of all walks from s to t, and x_t is the walk-sum over all walks ending at t, re-weighted by h.
As shown in [5], the walk-summability of a model is equivalent to ρ(R̄) < 1, where R̄ denotes the matrix of the absolute values of the elements of R. Also, a broad class of models are walk-summable, including diagonally-dominant models, so-called pairwise normalizable models, and models for which the underlying graph G is non-frustrated, i.e. each cycle has an even number of negative partial correlation coefficients. Walk-summability implies that a model is valid, i.e. has positive-definite information/covariance.
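The test is a one-liner in practice; a minimal sketch (ours), assuming a dense information matrix:

# Sketch: testing walk-summability. After renormalizing J to unit diagonal
# and setting R = I - J, the model is walk-summable iff the spectral radius
# of |R| (elementwise absolute value) is below 1.
import numpy as np

def is_walk_summable(J):
    d = np.sqrt(np.diag(J))                 # rescale to unit diagonal
    Jn = J / np.outer(d, d)
    R = np.eye(len(J)) - Jn
    rho = max(abs(np.linalg.eigvals(np.abs(R))))
    return rho < 1.0, rho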
Concatenation of walks. We briefly describe the concatenation operation for walks and walk-sets, which plays a key role in walk-sum analysis. Let u = u_0 ⋯ u_end and v = v_start v_1 ⋯ v_{ℓ(v)} be walks with u_end = v_start. The concatenation of these walks is defined to be u · v ≜ u_0 ⋯ u_end v_1 ⋯ v_{ℓ(v)}. Now consider a walk-set U with all walks ending at u_end and another walk-set V with all walks beginning at v_start. If u_end = v_start, then the concatenation of U and V is defined:

U ⊗ V ≜ {u · v : u ∈ U, v ∈ V}.
2.2 Embedded Trees algorithm
We describe the Embedded Trees iteration that performs a sequence of updates on trees, or more
generally tractable subgraphs, leading to the solution of (1) on an intractable graph. Each iteration
involves an inference calculation on a subgraph of all the variables V. Let (V, S) be some subgraph of G, i.e. S ⊆ E (see examples in Figure 1). Let J be split according to S as J = J_S − K_S, so that the entries of J corresponding to edges in S are assigned to J_S, and those corresponding to E\S are part of K_S. The diagonal entries of J are all part of J_S; thus, K_S has zeroes along the diagonal.³ Based on this splitting, we can transform (1) to J_S x = K_S x + h, which suggests a natural recursion: J_S x̂^(n) = K_S x̂^(n−1) + h. If J_S is invertible, and it is tractable to apply J_S^{-1} to a vector, then ET offers an effective method to solve (1) (assuming ρ(J_S^{-1} K_S) < 1). If the subgraph used changes
with each iteration, then we obtain the following non-stationary ET iteration:
x̂^(n) = J_{S_n}^{-1} (K_{S_n} x̂^(n−1) + h),    (6)

where {S_n}_{n=1}^∞ is any arbitrary sequence of subgraphs. An important degree of freedom is the
choice of the subgraph Sn at iteration n, which forms the focus of Section 4 of this paper. In [10] we
also consider a more general class of algorithms that update subsets of variables at each iteration.
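A minimal dense sketch of iteration (6) follows; it is our own illustration, and a practical implementation would instead exploit the tree structure of each S_n to apply J_{S_n}^{-1} in linear time.

# Sketch of the non-stationary ET iteration (6); data layout is our own.
# `subgraphs` is an iterable yielding the edge set S_n for each iteration.
import numpy as np

def embedded_trees(J, h, subgraphs, x0=None):
    x = np.zeros(len(h)) if x0 is None else np.array(x0, dtype=float)
    for S in subgraphs:
        JS = np.diag(np.diag(J))            # diagonal always belongs to J_S
        for (i, j) in S:                    # copy the edges of S_n into J_S
            JS[i, j] = J[i, j]; JS[j, i] = J[j, i]
        KS = JS - J                         # J = J_S - K_S, zero diagonal
        x = np.linalg.solve(JS, KS @ x + h) # eq. (6)
    return x

Passing empty edge sets S_n = ∅ reduces this to the classic Gauss-Jacobi iteration (J_S = I and K_S = R for normalized J), which is discussed as a walk-sum algorithm in the next section.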
² We generally denote the walk-sum of the set W(·) by φ(·).
³ K_S can have non-zero diagonal in general, but we only consider the zero diagonal case here.
Figure 1: (Left) G and three embedded trees S1, S2, S3; (Right) Corresponding walk-sum diagram.
3 Walk-Sum Analysis and Convergence of the Embedded Trees algorithm
In this section, we provide a walk-sum interpretation for the ET algorithm. Using this analysis, we
show that the non-stationary ET iteration (6) converges in walk-summable models for an arbitrary
choice of subgraphs {S_n}_{n=1}^∞. Before proceeding with the analysis, we point out that one potential complication with the ET algorithm is that the matrix J_S corresponding to some subgraph S may be indefinite or singular, even if the original model J is positive-definite. Importantly, such a problem never arises in walk-summable models, with J_S being positive-definite for any subgraph S if J is walk-summable. This is easily seen because walks in the subgraph S are a subset of the walks in G, and thus if absolute walk-sums in G are well-defined, then so are absolute walk-sums in S. Therefore, J_S is walk-summable, and hence, positive-definite.
Consider the following recursively defined set of walks for s, t ∈ V:

W_n(s → t) = [ ∪_{u,v∈V} W_{n−1}(s → u) ⊗ W(u →^{E\S_n(1)} v) ⊗ W(v →^{S_n} t) ] ∪ W(s →^{S_n} t)
           = [ W_{n−1}(s → ∗) ⊗ W(∗ →^{E\S_n(1)} ·) ⊗ W(· →^{S_n} t) ] ∪ W(s →^{S_n} t),    (7)

with W_0(s → t) = ∅. Here, ∗ and · are used as wildcard characters (a union over all elements in V), and ⊗ denotes concatenation of walk-sets as described previously. The set W_{n−1}(s → ∗) denotes walks that start at node s computed at the previous iteration. The middle term W(∗ →^{E\S_n(1)} ·) denotes a length-1 walk (called a hop) across an edge in E\S_n. Finally, W(· →^{S_n} t) denotes walks in S_n that end at node t. Thus, the first term in (7) refers to previously computed walks starting at s, which hop across an edge in E\S_n, and then finally propagate only in S_n (ending at t). The second term W(s →^{S_n} t) denotes walks from s to t that only live within S_n. The following proposition (proved in [10]) shows that the walks contained in these walk-sets are precisely those computed by the ET algorithm at iteration n. For simplicity, we denote φ(W_n(s → t)) by φ_n(s → t).
Proposition 1. Let x̂^(n) be the estimate at iteration n in the ET algorithm (6) with initial guess x̂^(0) = 0. Then, x̂_t^(n) = φ_n(h; ∗ → t) = Σ_{s∈V} h_s φ_n(s → t) in walk-summable models.
We note that the classic Gauss-Jacobi algorithm [6], a stationary iteration with J_S = I and K_S = R, can be interpreted as a walk-sum algorithm: x̂_t^(n) in this method computes all walks up to length n ending at t. Figure 1 gives an example of a walk-sum diagram, which provides a graphical representation of the walks accumulated by the walk-sets (7). The diagram is the three-level graph on the right, and corresponds to an ET iteration based on the subgraphs S1, S2, S3 of the 3 × 3 grid G (on the left). Each level n in the diagram consists of the subgraph S_n used at iteration n (solid edges), and information from the previous level (iteration) n − 1 is transmitted through the dashed edges in E\S_n. The directed nature of these dashed edges is critical as they capture the one-directional flow of computations from iteration to iteration, while the undirected edges within each level capture the inference computation at each iteration. Consider a node v at level n of the diagram. Walks in the diagram that start at any node and end at v at level n, re-weighted by h, are exactly the walks computed by the ET algorithm in x̂_v^(n). For more examples of such diagrams, see [10].
Given this walk-sum interpretation of the ET algorithm, we can analyze the walk-sets (7) to prove the convergence of ET in walk-summable models by showing that the walk-sets eventually contain all the walks required for the computation of $J^{-1}h$ in (5). We have the following convergence theorem, for which we only provide a brief sketch of the complete proof [10].
Theorem 1. Let $\hat{x}^{(n)}$ be the estimate at iteration $n$ in the ET algorithm (6) with initial guess $\hat{x}^{(0)} = 0$. Then, $\hat{x}^{(n)} \to J^{-1}h$ element-wise as $n \to \infty$ in walk-summable models.
Proof outline: Proving this statement is done in the following stages.
Validity: The walks in $\mathcal{W}_n$ are valid walks in $G$, i.e. $\mathcal{W}_n(s \to t) \subseteq \mathcal{W}(s \to t)$.
Nesting: The walk-sets $\mathcal{W}_n(s \to t)$ are nested, i.e. $\mathcal{W}_{n-1}(s \to t) \subseteq \mathcal{W}_n(s \to t)$ for all $n$.
Completeness: Let $w \in \mathcal{W}(s \to t)$. There exists an $N > 0$ such that $w \in \mathcal{W}_N(s \to t)$. Using the nesting property, we conclude that for all $n \geq N$, $w \in \mathcal{W}_n(s \to t)$.
These steps combined allow us to conclude that $\phi_n(s \to t) \to \phi(s \to t)$ as $n \to \infty$. This conclusion relies on the fact that $\phi(\mathcal{W}_n) \to \phi(\bigcup_n \mathcal{W}_n)$ as $n \to \infty$ for a sequence of nested walk-sets $\mathcal{W}_{n-1} \subseteq \mathcal{W}_n$ in walk-summable models, which is a consequence of the sum-partition theorem for absolutely summable series [5, 10, 12]. Given the walk-sum interpretation from Proposition 1, one can check that $\hat{x}^{(n)} \to J^{-1}h$ element-wise as $n \to \infty$.
Thus, the ET algorithm converges to the correct solution of (1) in walk-summable models for any sequence of subgraphs with $\hat{x}^{(0)} = 0$. It is then straightforward to show that convergence can be achieved for any initial guess [10]. Note that we have taken advantage of the absolute convergence property in walk-summable models (3) by not focusing on the order in which walks are computed, but only on the fact that they are eventually computed. In [10], we prove that walk-summability is also a necessary condition for this complete flexibility in the choice of subgraphs: in non-walk-summable models there exists at least one sequence of subgraphs that results in a divergent ET iteration.
4 Adaptive algorithms
Let $e^{(n)} = x - \hat{x}^{(n)}$ be the error at iteration $n$ and let $h^{(n)} = Je^{(n)} = h - J\hat{x}^{(n)}$ be the corresponding residual error (which is tractable to compute). We begin by describing an algorithm to choose the "next-best" tree $S_n$ in the ET iteration (6). The error at iteration $n$ can be re-written as follows:

$$e^{(n)} = \left(J^{-1} - J_{S_n}^{-1}\right)h^{(n-1)}.$$
Thus, we have the walk-sum interpretation $e^{(n)}_t = \phi\big(h^{(n-1)}; \ast \xrightarrow{G \setminus S_n} t\big)$, where $G \setminus S_n$ denotes walks that do not live entirely within $S_n$. Using this expression for the error, we have the following bound, which is tight for attractive models ($R_{s,t} \geq 0$ for all $s, t \in V$) and non-negative $h^{(n-1)}$:

$$\|e^{(n)}\|_{\ell_1} = \sum_{t \in V} \left|\phi\big(h^{(n-1)}; \ast \xrightarrow{G \setminus S_n} t\big)\right| \leq \bar{\phi}\big(|h^{(n-1)}|; G \setminus S_n\big) = \bar{\phi}\big(|h^{(n-1)}|; G\big) - \bar{\phi}\big(|h^{(n-1)}|; S_n\big). \qquad (8)$$
Hence, minimizing the error at iteration $n$ corresponds to finding the tree $S_n$ that maximizes the second term $\bar{\phi}\big(|h^{(n-1)}|; S_n\big)$. This leads us to the following maximum walk-sum tree problem:

$$\arg\max_{S_n \text{ a tree}} \; \bar{\phi}\big(|h^{(n-1)}|; S_n\big). \qquad (9)$$
Finding the optimal such tree is combinatorially complex. Therefore, we develop a relaxation that minimizes a looser upper bound than (8). Specifically, consider an edge $\{u, v\}$ and all the walks that live on this single edge, $\mathcal{W}(\{u, v\}) = \{uv, vu, uvu, vuv, uvuv, vuvu, \ldots\}$. One can check that the contribution based on these single-edge walks can be computed as:

$$\omega_{u,v} = \sum_{w \in \mathcal{W}(\{u,v\})} \bar{\phi}\big(|h^{(n-1)}|; w\big) = \left(|h^{(n-1)}_u| + |h^{(n-1)}_v|\right)\frac{|R_{u,v}|}{1 - |R_{u,v}|}. \qquad (10)$$
This weight provides a measure of the error-reduction capacity of edge $\{u, v\}$ by itself at iteration $n$. These single-edge walks for edges in $S_n$ are a subset of all the walks in $S_n$, and consequently provide a lower bound on $\bar{\phi}\big(|h^{(n-1)}|; S_n\big)$. Therefore, the maximization

$$\arg\max_{S_n \text{ a tree}} \sum_{\{u,v\} \in S_n} \omega_{u,v} \qquad (11)$$
Figure 2: Grayscale images of residual errors in an 8 × 8 grid at successive iterations, and corresponding trees chosen by the adaptive method.
Figure 3: Grayscale images of residual errors in an 8 × 8 grid at successive iterations, and corresponding hypertrees chosen by the adaptive method.
is equivalent to minimizing a looser upper bound than (8). This relaxed problem can be solved efficiently using a maximum-weight spanning tree algorithm that has complexity $O(|E| \log\log |V|)$ for sparse graphs [13].
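A minimal sketch of this tree-selection step, assuming the model is given as a dense matrix R of partial correlation coefficients and a current residual vector h_res. The edge weights follow (10), and a standard maximum-weight spanning tree routine then solves (11); networkx's generic implementation does not attain the cited O(|E| log log |V|) bound, which is immaterial for illustration.

```python
import networkx as nx

def next_best_tree(R, h_res, edges):
    """Choose S_n via (10)-(11): weight each edge by its single-edge
    walk-sum and return a maximum-weight spanning tree over those weights."""
    G = nx.Graph()
    for (u, v) in edges:
        r = abs(R[u, v])
        w = (abs(h_res[u]) + abs(h_res[v])) * r / (1.0 - r)  # eq. (10)
        G.add_edge(u, v, weight=w)
    return list(nx.maximum_spanning_tree(G, weight="weight").edges())
```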
Given the maximum-weight spanning tree of the graph, a natural extension is to build a thin hypertree by adding extra "strong" edges to the tree, subject to the constraint that the resulting graph has low treewidth. Unfortunately, to do so optimally is an NP-hard optimization problem [14]. Hence, we settle on a simple greedy algorithm (sketched in code below). For each edge not included in the tree, in order of decreasing edge weight, we add the edge to the graph if two conditions are met: first, we are able to easily verify that the treewidth stays less than $M$, and second, the length of the unique path in $S_n$ between the endpoints is less than $L$. In order to bound the treewidth, we maintain a counter at each node of the total number of added edges that result in a path through that node. Comparing to another method for constructing junction trees from spanning trees [15], one can check that the maximum node count is an upper bound on the treewidth. We note that by using an appropriate directed representation of $S_n$ relative to an arbitrary root, it is simple to identify the path between two nodes with complexity linear in the path length ($< L$).⁴ Hence, the additional complexity of this greedy algorithm over that of the tree-selection procedure described previously is $O(L|E|)$.
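The following is one plausible rendering of this greedy augmentation in Python; the paper does not spell out the bookkeeping, so the exact counter update below is an assumption consistent with the description (counts stay at most M, and candidate paths must have fewer than L edges).

```python
import networkx as nx

def greedy_hypertree(T, candidate_edges, weights, M=6, L=8):
    """Thicken spanning tree T into a thin hypertree (greedy sketch)."""
    H = T.copy()
    count = {v: 0 for v in T.nodes()}  # added-edge paths passing through v
    for (u, v) in sorted(candidate_edges, key=lambda e: -weights[e]):
        path = nx.shortest_path(T, u, v)       # the unique tree path
        if len(path) - 1 >= L:                 # path length must be < L
            continue
        if any(count[w] + 1 > M for w in path):
            continue                           # treewidth bound would fail
        for w in path:
            count[w] += 1
        H.add_edge(u, v)
    return H
```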
In Figure 2 and Figure 3 we present a simple demonstration of the tree and hypertree selection procedures, respectively, and the corresponding change in error achieved. The grayscale images represent the residual errors at the nodes of an 8 × 8 grid similar to $G$ in Figure 1 (with white representing 1 and black representing 0), and the graphs beside them show the trees/hypertrees chosen based on these residual errors using the methods described above (the grid edge partial correlation coefficients are the same for all edges). Notice that the first tree in Figure 2 tries to include as many edges as possible that are incident on the nodes with high residual error. Such edges are useful for capturing walks ending at the high-error nodes, which contribute to the set of walks in (5). The first hypertree in Figure 3 actually includes all the edges incident on the high-error nodes. The residual errors after inference on these subgraphs are shown next in Figure 2 and Figure 3. As expected, the hypertree seems to achieve a greater reduction in error compared to the spanning tree. Again, at this iteration, the subgraphs chosen by our methods adapt based on the errors at the various nodes.
5 Experimental illustration
5.1 Walk-summable models
We test the adaptive algorithms on densely connected nearest-neighbor grid-structured models (similar to $G$ in Figure 1). We generate random grid models: the grid edge partial correlation coefficients are chosen uniformly from $[-1, 1]$ and $R$ is scaled so that $\varrho(\bar{R}) = 0.99$. The vector $h$ is chosen to be the all-ones vector.⁴
⁴One sets two pointers into the tree starting from any two nodes and then iteratively walks up the tree, always advancing from the pointer that is deeper in the tree, until the nearest ancestor of the two nodes is reached.
Figure 4: (Left) Average number of iterations required for the normalized residual to reduce by a factor of $10^{-6}$ over 100 randomly generated 75 × 75 grid models; (Center) convergence plot for a randomly generated 511 × 511 grid model; (Right) convergence range in terms of partial correlation for a 16-node cyclic model with edges to neighbors two steps away.
Figure 5: (Left) 16-node graphical model; (Right) two embedded spanning trees T1, T2.
The table on the left in Figure 4 shows the average number of iterations required by various algorithms to reduce the normalized residual error $\|h^{(n)}\|_2 / \|h^{(0)}\|_2$ by a factor of $10^{-6}$. The average was computed based on 100 randomly generated 75 × 75 grid models. The plot in Figure 4 shows the decrease in the normalized residual error as a function of the number of iterations on a randomly generated 511 × 511 grid model. All these models are poorly conditioned because they are barely walk-summable ($\varrho(\bar{R}) = 0.99$). The stationary one-tree iteration uses a tree similar to $S_1$ in Figure 1, and the two-tree iteration alternates between trees similar to $S_1$ and $S_3$ in Figure 1 [4]. The adaptive hypertree method uses $M = 6$ and $L = 8$. We also note that in practice the per-iteration costs of the adaptive tree and hypertree algorithms are roughly comparable.
These results show that our adaptive algorithms demonstrate significantly superior convergence properties compared to stationary methods, thus providing a convergent, computationally attractive method for estimation in walk-summable models. Our methods are applicable beyond Gaussian estimation to other problems that require the solution of linear systems based on sparse, symmetric, positive-definite matrices. Several recent papers that develop machine learning algorithms are based on solving such systems of equations [8, 9]; in fact, both of these papers involve linear systems based on diagonally-dominant matrices, which are walk-summable.
5.2 Non-walk-summable models
Next, we give empirical evidence that our adaptive methods provide convergence over a broader range of models than stationary iterations. One potential complication in non-walk-summable models is that the subgraph models chosen by the stationary and adaptive algorithms may be indefinite or singular even though $J$ is positive-definite. In order to avoid this problem in the adaptive ET algorithm, the trees $S_n$ chosen at each iteration must be valid (i.e., have positive-definite $J_{S_n}$). A simple modification to the maximum-weight spanning tree algorithm achieves this goal: we add an extra condition to the algorithm to test for diagonal dominance of the chosen tree model (as all symmetric, diagonally-dominant models are positive definite [6]). That is, at each step of the maximum-weight spanning tree algorithm, we only add an edge if it does not create a cycle and maintains a diagonally-dominant tractable subgraph model. Consider the 16-node model on the left in Figure 5. Let all the edge partial correlation coefficients be $r$. The range of $r$ for which $J$ is positive-definite is roughly $(-0.46, 0.25)$, and the range for which the model is walk-summable is $(-0.25, 0.25)$ (in this range all the algorithms, both stationary and adaptive, converge). For the one-tree iteration we use tree $T_1$, and for the two-tree iteration we alternate between trees $T_1$ and $T_2$ (see Figure 5).
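One plausible rendering of this modified selection step is sketched below; the per-node accumulation of off-diagonal magnitude is our own bookkeeping for the DD test, not spelled out in the text.

```python
import numpy as np
from networkx.utils import UnionFind

def dd_checked_tree(J, weights):
    """Kruskal-style maximum-weight tree with a diagonal-dominance check."""
    n = J.shape[0]
    uf = UnionFind(range(n))
    offdiag = np.zeros(n)            # accumulated |J_uv| at each node
    tree = []
    for (u, v) in sorted(weights, key=lambda e: -weights[e]):
        if uf[u] == uf[v]:
            continue                 # adding (u, v) would create a cycle
        a = abs(J[u, v])
        if offdiag[u] + a >= J[u, u] or offdiag[v] + a >= J[v, v]:
            continue                 # would break diagonal dominance
        uf.union(u, v)
        offdiag[u] += a
        offdiag[v] += a
        tree.append((u, v))
    return tree
```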
As the table on the right in Figure 4 demonstrates, the adaptive tree algorithm without the diagonal-dominance (DD) check provides convergence over a much broader range of models than the one-tree and two-tree iterations, but not for all valid models. However, the modified adaptive tree algorithm with the DD check appears to converge almost up to the validity threshold. We have also observed such behavior empirically in many other (though not all) non-walk-summable models, where the adaptive ET algorithm with the DD condition converges while stationary methods diverge. Thus, our adaptive methods, compared to stationary iterations, not only provide faster convergence rates in walk-summable models but also converge for a broader class of models.
6 Discussion
We analyze non-stationary iterations of the ET algorithm that use any sequence of subgraphs for estimation in Gaussian graphical models. Our analysis is based on the recently developed walk-sum interpretation of inference in Gaussian models, and we show that the ET algorithm converges for any sequence of subgraphs used in walk-summable models. These convergence results motivate the development of methods to choose subgraphs adaptively at each iteration to achieve maximum reduction in error. The adaptive procedures are based on walk-sum calculations, and minimize an upper bound on the error at each iteration. Our simulation results show that the adaptive algorithms provide a significant speedup in convergence over stationary methods. Moreover, these adaptive methods also seem to provide convergence over a broader class of models than stationary algorithms.
Our adaptive algorithms are greedy in that they only choose the "next-best" subgraph. An interesting question is to develop tractable methods to compute the next K best subgraphs jointly to achieve maximum reduction in error after K iterations. The experiment with non-walk-summable models suggests that walk-sum analysis could be useful to provide convergent algorithms for non-walk-summable models, perhaps with restrictions on the order in which walk-sums are computed. Finally, subgraph preconditioners have been shown to improve the convergence rate of the conjugate-gradient method; using walk-sum analysis to select such preconditioners is of clear interest.
References
[1] M. Luettgen, W. Carl, and A. Willsky. Efficient multiscale regularization with application to optical flow. IEEE Transactions on Image Processing, 3(1):41–64, Jan. 1994.
[2] P. Rusmevichientong and B. Van Roy. An Analysis of Turbo Decoding with Gaussian densities. In Advances in Neural Information Processing Systems 12, 2000.
[3] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, CA, 1988.
[4] E. Sudderth, M. Wainwright, and A. Willsky. Embedded Trees: Estimation of Gaussian processes on graphs with cycles. IEEE Transactions on Signal Processing, 52(11):3136–3150, Nov. 2004.
[5] D. Malioutov, J. Johnson, and A. Willsky. Walk-Sums and Belief Propagation in Gaussian Graphical Models. Journal of Machine Learning Research, 7:2031–2064, Oct. 2006.
[6] R. Varga. Matrix Iterative Analysis. Springer-Verlag, New York, 2000.
[7] R. Bru, F. Pedroche, and D. Szyld. Overlapping Additive and Multiplicative Schwarz iterations for H-matrices. Linear Algebra and its Applications, 393:91–105, Dec. 2004.
[8] D. Zhou, J. Huang, and B. Schölkopf. Learning from Labeled and Unlabeled data on a directed graph. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[9] D. Zhou and C. Burges. Spectral Clustering and Transductive Learning with multiple views. In Proceedings of the 24th International Conference on Machine Learning, 2007.
[10] V. Chandrasekaran, J. Johnson, and A. Willsky. Estimation in Gaussian Graphical Models using Tractable Subgraphs: A Walk-Sum Analysis. To appear in IEEE Transactions on Signal Processing.
[11] D. Malioutov, J. Johnson, and A. Willsky. GMRF variance approximation using spliced wavelet bases. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2007.
[12] R. Godement. Analysis I: Convergence, Elementary Functions. Springer-Verlag, New York, 2004.
[13] T. Cormen, C. Leiserson, R. Rivest, and C. Stein. Introduction to Algorithms. MIT Press, 2001.
[14] N. Srebro. Maximum Likelihood Markov Networks: An Algorithmic Approach. Master's thesis, Massachusetts Institute of Technology, 2000.
[15] F. Kschischang, B. Frey, and H. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498–519, Feb. 2001.
Hidden Common Cause Relations in Relational Learning
Ricardo Silva∗
Gatsby Computational Neuroscience Unit
UCL, London, UK WC1N 3AR
rbas@gatsby.ucl.ac.uk
Wei Chu
Center for Computational Learning Systems
Columbia University, New York, NY 10115
chuwei@cs.columbia.edu
Zoubin Ghahramani
Department of Engineering
University of Cambridge, UK CB2 1PZ
zoubin@eng.cam.ac.uk
Abstract
When predicting class labels for objects within a relational database, it is often
helpful to consider a model for relationships: this allows for information between
class labels to be shared and to improve prediction performance. However, there
are different ways by which objects can be related within a relational database.
One traditional way corresponds to a Markov network structure: each existing
relation is represented by an undirected edge. This encodes that, conditioned on
input features, each object label is independent of other object labels given its
neighbors in the graph. However, there is no reason why Markov networks should
be the only representation of choice for symmetric dependence structures. Here
we discuss the case when relationships are postulated to exist due to hidden common causes. We discuss how the resulting graphical model differs from Markov
networks, and how it describes different types of real-world relational processes.
A Bayesian nonparametric classification model is built upon this graphical representation and evaluated with several empirical studies.
1 Contribution
Prediction problems, such as classification, can be easier when class labels share a sort of relational
dependency that is not accounted by the input features [10]. If the variables to be predicted are attributes of objects in a relational database, such dependencies are often postulated from the relations
that exist in the database. This paper proposes and evaluates a new method for building classifiers
that uses information concerning the relational structure of the problem.
Consider the following standard example, adapted from [3]. There are different webpages, each
one labeled according to some class (e.g., "student page" or "not a student page"). Features such
as the word distribution within the body of each page can be used to predict each webpage's class.
However, webpages do not exist in isolation: there are links connecting them. Two pages having a
common set of links is evidence for similarity between such pages. For instance, if W1 and W3 both
link to W2 , this is commonly considered to be evidence for W1 and W3 having the same class. One
way of expressing this dependency is through the following Markov network [5]:
∗Now at the Statistical Laboratory, University of Cambridge. E-mail: silva@statslab.cam.ac.uk
[Markov network diagram: feature nodes F1, F2, F3, each linked to its label node C1, C2, C3, with the labels forming the chain C1–C2–C3.]
Here $F_i$ are the features of page $W_i$, and $C_i$ is its respective page label. Other edges linking $F$ variables to $C$ variables (e.g., $F_1$–$C_2$) can be added without affecting the main arguments presented in this section. The semantics of the graph, for a fixed input feature set $\{F_1, F_2, F_3\}$, are as follows: $C_1$ is marginally dependent on $C_3$, but conditionally independent given $C_2$. Depending on the domain, this might be either a suitable or unsuitable representation of relations. For instance, in some domains it could be the case that the most sensible model would state that $C_1$ is only informative about $C_3$ once we know what $C_2$ is: that is, $C_1$ and $C_3$ are marginally independent, but dependent given $C_2$. This can happen if the existence of a relation $(C_i, C_j)$ corresponds to the existence of hidden common causes generating this pair of random variables.
Consider the following example, loosely based on a problem described by [12]. We have three
objects, Microsoft (M ), Sony (S) and Philips (P ). The task is a regression task where we want
to predict the stock market price of each company given its profitability from last year. The given
relationships are that M and S are direct competitors (due to the videogame console market), as are S and P (due to the TV set market).
Figure 1: (a) Assumptions that relate Microsoft, Sony and Philips stock prices through hidden common cause mechanisms, depicted as unlabeled gray vertices; (b) a graphical representation for generic hidden common cause relationships using bi-directed edges; (c) a depiction of the same relationship skeleton by a Markov network model, which has different probabilistic semantics.
It is expected that several market factors that affect stock prices are unaccounted for by the predictor variable Past Year Profit. For example, a shortage of Microsoft consoles is a hidden common factor for both Microsoft's and Sony's stock. Another hidden common cause would be a high price for Sony's consoles. Assume here that these factors have no effect on Philips' stock value. A depiction of several hidden common causes that correspond to the relations Competitor(M, S) and Competitor(S, P) is given in Figure 1(a) as unlabeled gray vertices.
Consider a linear regression model for this setup. We assume that for each object $O_i \in \{M, S, P\}$, the stock price $O_i.\mathrm{Stock}$, centered at the mean, is given by

$$O_i.\mathrm{Stock} = \beta \cdot O_i.\mathrm{Profit} + \epsilon_i \qquad (1)$$

where each $\epsilon_i$ is a Gaussian random variable.
The fact that there are several hidden common causes between $M$ and $S$ can be modeled by the covariance of $\epsilon_m$ and $\epsilon_s$, $\sigma_{ms}$. That is, unlike in standard directed Gaussian models, $\sigma_{ms}$ is allowed to be non-zero. The same holds for $\sigma_{sp}$. Covariances of error terms of unrelated objects should be zero ($\sigma_{mp} = 0$). This setup is very closely related to the classic seemingly unrelated regression model popular in economics [12].
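To make the error structure concrete, here is a small simulation sketch; the numeric values of β and of the covariances are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.5                          # illustrative regression weight (made up)
# Error covariance over (M, S, P): sigma_ms and sigma_sp are non-zero
# (competitor pairs), sigma_mp = 0 (no shared hidden common cause).
Sigma = np.array([[1.0, 0.6, 0.0],
                  [0.6, 1.0, 0.5],
                  [0.0, 0.5, 1.0]])
profit = rng.normal(size=3)                        # centered profits
eps = rng.multivariate_normal(np.zeros(3), Sigma)  # correlated errors
stock = beta * profit + eps                        # model (1)
```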
A graphical representation for this type of model is the directed mixed graph (DMG) [9, 11], with
bi-directed edges representing the relationship of having hidden common causes between a pair
of vertices. This is shown in Figure 1(b). Contrast this to the Markov network representation in
Figure 1(c). The undirected representation encodes that $\epsilon_m$ and $\epsilon_p$ are marginally dependent, which does not correspond to our assumptions¹. Moreover, the model in Figure 1(b) states that once we observe Sony's stock price, Philips' stocks (and profit) should have a non-zero association with Microsoft's profit: this follows from an extension of d-separation to DMGs [9]. This is expected from the assumptions (Philips' stocks should tell us something about Microsoft's once we know Sony's,
but not before it), but does not hold in the graphical model in Figure 1(c). While it is tempting
to use Markov networks to represent relational models (free of concerns raised by cyclic directed
representations), it is clear that there are problems for which they are not a sensible choice.
This is not to say that Markov networks are not the best representation for large classes of relational
problems. Conditional random fields [4] are well-motivated Markov network models for sequence
learning. The temporal relationship is closed under marginalization: if we do not measure some steps
in the sequence, we will still link the corresponding remaining vertices accordingly, as illustrated in
Figure 2. Directed mixed graphs are not a good representation for this sequence structure.
Figure 2: (a) A conditional random field (CRF) graph for sequence data; (b) A hypothetical scenario
where two of the time slices are not measured, as indicated by dashed boxes; (c) The resulting CRF
graph for the remaining variables, which corresponds to the same criteria for construction of (a).
To summarize, the decision between using a Markov network or a DMG reduces to the following
modeling issue: if two unlinked object labels yi , yj are statistically associated when some chain
of relationships exists between yi and yj , then the Markov network semantics should apply (as in
the case for temporal relationships). However, if the association arises only given the values of the
other objects in the chain, then this is accounted by the dependence semantics of the directed mixed
graph representation. The DMG representation propagates training data information through other
training points. The Markov network representation propagates training data information through
test points. Propagation through training points is relevant in real problems. For instance, in a
webpage domain where each webpage has links to pages of several kinds (e.g., [3]), a chain of
intermediate points between two class labels yi and yj is likely to be more informative if we
know the values of the labels in this chain. The respective Markov network would ignore all training
points in this chain besides the endpoints.
In this paper, we introduce a non-parametric classification model for relational data that factorizes
according to a directed mixed graph. Sections 2 and 3 describe the model and contrast it to a
closely related approach which bears a strong analogy to the Markov network formulation. Experiments in text classification are described in Section 4.
2 Model
Chu et al. [2] describe an approach for Gaussian process classification using relational information,
which we review and compare to our proposed model.
Previous approach: relational Gaussian processes through indicators. For each point $x$ in the input space $\mathcal{X}$, there is a corresponding function value $f_x$. Given observed input points $x_1, x_2, \ldots, x_n$, a Gaussian process prior over $\mathbf{f} = [f_1, f_2, \ldots, f_n]^T$ has the shape

$$P(\mathbf{f}) = \frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}} \exp\left(-\frac{1}{2}\,\mathbf{f}^T \Sigma^{-1} \mathbf{f}\right) \qquad (2)$$
¹For Gaussian models, the absence of an edge in the undirected representation (i.e., Gaussian Markov random fields) corresponds to a zero entry in the inverse covariance matrix, whereas in the DMG it corresponds to a zero in the covariance matrix [9].
Figure 3: (a) A prediction problem where $y_3$ is unknown and the training set is composed of the other two datapoints. Dependencies between $f_1$, $f_2$ and $f_3$ are given by a Gaussian process prior and not represented in the picture. Indicators $\xi_{ij}$ are known and set to 1; (b) the extra associations that arise by conditioning on $\boldsymbol{\xi} = 1$ can be factorized as the Markov network model here depicted, in the spirit of [9]; (c) our proposed model, which ties the error terms and has origins in known statistical models such as seemingly unrelated regression and structural equation models [11].
where the $ij$-th entry of $\Sigma$ is given by a Mercer kernel function $K(x_i, x_j)$ [8].
The idea is to start from a standard Gaussian process prior, and add relational information by conditioning on relational indicators. Let $\xi_{ij}$ be an indicator that assumes different values, e.g., 1 or 0. The indicator values are observed for each pair of data points $(x_i, x_j)$: they are an encoding of the given relational structure. A model for $P(\xi_{ij} = 1|f_i, f_j)$ is defined. This evidence is incorporated into the Gaussian process by conditioning on all indicators $\xi_{ij}$ that are positive. Essentially, the idea boils down to using $P(\mathbf{f}|\boldsymbol{\xi} = 1)$ as the prior for a Gaussian process classifier. Figure 3(a) illustrates a problem with datapoints $\{(x_1, y_1), (x_2, y_2), (x_3, y_3)\}$. Gray vertices represent unobserved variables. Each $y_i$ is a binary random variable, with conditional probability given by

$$P(y_i = 1|f_i) = \Phi(f_i/\sigma) \qquad (3)$$

where $\Phi(\cdot)$ is the standard normal cumulative function and $\sigma$ is a hyperparameter. This can be interpreted as the cumulative distribution of $f_i + \epsilon_i$, where $f_i$ is given and $\epsilon_i$ is a normal random variable with zero mean and variance $\sigma^2$.
In the example of Figure 3(a), one has two relations: $(x_1, x_2)$, $(x_2, x_3)$. This information is incorporated by conditioning on the evidence $(\xi_{12} = 1, \xi_{23} = 1)$. Observed points $(x_1, y_1), (x_2, y_2)$ form the training set. The prediction task is to estimate $y_3$. Notice that $\xi_{12}$ is not used to predict $y_3$: the Markov blanket for $f_3$ includes $(f_1, f_2, \xi_{23}, y_3, \epsilon_3)$ and the input features. Essentially, conditioning on $\boldsymbol{\xi} = 1$ corresponds to a pairwise Markov network structure, as depicted in Figure 3(b) [9]².
Our approach: mixed graph relational model. Figure 3(c) illustrates our proposed setup. For reasons that will become clear in the sequel, we parameterize the conditional probability of $y_i$ as

$$P(y_i = 1|g_i, v_i) = \Phi(g_i/\sqrt{v_i}) \qquad (4)$$

where $g_i = f_i + \zeta_i$. As before, Equation (4) can be interpreted as the cumulative distribution of $g_i + \xi_i$, with $\xi_i$ a normal random variable with zero mean and variance $v_i = \sigma^2 - \sigma_{\zeta_i}^2$, the last term being the variance of $\zeta_i$. That is, we break the original error term as $\epsilon_i = \zeta_i + \xi_i$, where $\xi_i$ and $\xi_j$ are independent for all $i \neq j$. The random vector $\boldsymbol{\zeta}$ is a multivariate normal with zero mean and covariance matrix $\Sigma_\zeta$. The key aspect in our model is that the covariance of $\zeta_i$ and $\zeta_j$ is non-zero only if objects $i$ and $j$ are related (that is, bi-directed edge $y_i \leftrightarrow y_j$ is in the relational graph). Parameterizing $\Sigma_\zeta$ for relational problems is non-trivial and discussed in the next section.
In the example of Figure 3, one noticeable difference between our model 3(c) and a standard Markov network model 3(b) is that now the Markov blanket for $f_3$ includes error terms for all variables (both $\epsilon$ and $\zeta$ terms), following the motivation presented in Section 1.
²In the figure, we are not representing explicitly that $f_1$, $f_2$ and $f_3$ are not independent (the prior covariance matrix $\Sigma$ is complete). The figure is meant as a representation of the extra associations that arise when conditioning on $\boldsymbol{\xi} = 1$, and the way such associations factorize.
As before, the prior for $\mathbf{f}$ in our setup is the Gaussian process prior (2). This means that $\mathbf{g}$ has the following Gaussian process prior (implicitly conditioned on $\mathbf{x}$):

$$P(\mathbf{g}) = \frac{1}{(2\pi)^{n/2}|\mathbf{R}|^{1/2}} \exp\left(-\frac{1}{2}\,\mathbf{g}^\top \mathbf{R}^{-1}\mathbf{g}\right) \qquad (5)$$

where $\mathbf{R} = \mathbf{K} + \Sigma_\zeta$ is the covariance matrix of $\mathbf{g} = \mathbf{f} + \boldsymbol{\zeta}$, with $\mathbf{K}_{ij} = K(x_i, x_j)$.
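In code, assembling this prior covariance is a one-liner; the sketch below uses the linear kernel that Section 4 adopts, and takes $\Sigma_\zeta$ as given.

```python
import numpy as np

def g_prior_covariance(X, Sigma_zeta):
    """R = K + Sigma_zeta for the prior (5); K is the linear kernel
    K(x, z) = x . z used in the experiments of Section 4."""
    K = X @ X.T
    return K + Sigma_zeta
```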
3 Parametrizing a mixed graph model for relational classification
For simplicity, in this paper we will consider only relationships that induce positive associations
between labels. Ideally, the parameterization of $\Sigma_\zeta$ has to fulfill two desiderata: (i) it should respect the marginal independence constraints as encoded by the graphical model (i.e., zero covariance for vertices that are not adjacent), and be positive definite; (ii) it has to be parsimonious in order to
facilitate hyperparameter selection, both computationally and statistically. Unlike the multivariate
analysis problems in [11], the size of our covariance matrix grows with the number of data points.
As shown by [11], exact inference in models with covariance matrices with zero-entry constraints is
computationally demanding. We provide two alternative parameterizations that are not as flexible,
but which lead to covariance matrices that are simple to compute and easy to implement. We will
work under the transductive scenario, where training and all test points are given in advance. The
corresponding graph thus contains unobserved and observed label nodes.
3.1 Method I
The first method is an automated method to relax some of the independence constraints, while guaranteeing positive-definiteness, and a parameterization that depends on a single scalar $\upsilon$. This allows for more efficient inference and is done as follows:
1. Let $\mathcal{G}_\zeta$ be the corresponding bi-directed subgraph of our original mixed graph, and let $\mathbf{U}^0$ be a matrix with $n \times n$ entries, $n$ being the number of nodes in $\mathcal{G}_\zeta$;
2. Set $\mathbf{U}^0_{ij}$ to be the number of cliques in $\mathcal{G}_\zeta$ where $y_i$ and $y_j$ appear together;
3. Set $\mathbf{U}^0_{ii}$ to be the number of cliques containing $y_i$, plus a small constant $\Delta$;
4. Set $\mathbf{U}$ to be the corresponding correlation matrix obtained by interpreting $\mathbf{U}^0$ as a covariance matrix and rescaling it.
Finally, set $\Sigma_\zeta = \upsilon\mathbf{U}$, where $\upsilon \in [0, 1]$ is a given hyperparameter. Matrix $\mathbf{U}$ is always guaranteed to be positive definite: it is equivalent to obtaining the covariance matrix of $\mathbf{y}$ from a linear latent variable model, where there is an independent standard Gaussian latent variable as a common parent to every clique, and every observed node $y_i$ is given by the sum of its parents plus an independent error term of variance $\Delta$. Marginal independencies are respected, since independent random variables will never be in a same clique in $\mathcal{G}_\zeta$. In practice, this method cannot be used as is, since the number of cliques will in general grow at an exponential rate as a function of $n$. Instead, we first triangulate the graph: in this case, extracting cliques can be done in polynomial time. This is a relaxation of the original goal, since some of the original marginal independence constraints will not be enforced due to the triangulation³.
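A sketch of Method I in Python, under the assumption that the bi-directed subgraph is supplied as an undirected networkx graph that has already been triangulated (so its maximal cliques can be enumerated efficiently):

```python
import numpy as np
import networkx as nx

def method_one_U(G_zeta, delta=1e-4):
    """Method I sketch: clique-count matrix U0, rescaled to a correlation
    matrix U. Assumes G_zeta is already triangulated (chordal)."""
    nodes = list(G_zeta.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    U0 = np.zeros((len(nodes), len(nodes)))
    for clique in nx.find_cliques(G_zeta):     # maximal cliques
        for u in clique:
            U0[idx[u], idx[u]] += 1.0          # cliques containing y_u
            for v in clique:
                if u != v:
                    U0[idx[u], idx[v]] += 1.0  # cliques containing both
    U0[np.diag_indices_from(U0)] += delta      # the small constant Delta
    d = np.sqrt(np.diag(U0))
    return U0 / np.outer(d, d)                 # covariance -> correlation
```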
3.2 Method II
The method suggested in the previous section is appealing under the assumption that vertices that
appear in many common cliques are more likely to have more hidden common causes, and hence
should have stronger associations. However, sometimes the triangulation introduces bad artifacts,
with lots of marginal independence constraints being violated. In this case, this will often result in
a poor prediction performance. A cheap alternative approach is not generating cliques, and instead
³The need for an approximation is not a shortcoming only of the DMG approach. Notice that the relational Gaussian process of [2] also requires an approximation of its relational kernel.
Figure 4: (a) The link matrix for the political books dataset. (b) The relational kernel matrix obtained
with the approximated Method I. (c) The kernel matrix obtained with Method II, which tends to
produce much weaker associations but does not introduce spurious relations.
getting a marginal covariance matrix from a different latent variable model. In this model, we create an independent standard Gaussian variable for each edge $y_i \leftrightarrow y_j$ instead of each clique. No triangulation will be necessary, and all marginal independence constraints will be respected. This, however, has shortcomings of its own: for all pairs $(y_i, y_j)$ connected by an edge, it will be the case that $\mathbf{U}^0_{ij} = 1$, while $\mathbf{U}^0_{ii}$ can be as large as $n$. This means that the resulting correlation $\mathbf{U}_{ij}$ can be close to zero even if $y_i$ and $y_j$ are always in the same cliques. In Section 4, we will choose between Methods I and II according to the marginal likelihood of the model.
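The edge-based construction is even simpler; a sketch, with the same conventions as the Method I sketch above:

```python
import numpy as np

def method_two_U(G_zeta, delta=1e-4):
    """Method II sketch: one latent variable per edge, so U0_ij = 1 on
    edges and U0_ii = degree(i) + delta; no triangulation is needed."""
    nodes = list(G_zeta.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    U0 = np.zeros((len(nodes), len(nodes)))
    for (u, v) in G_zeta.edges():
        U0[idx[u], idx[v]] = U0[idx[v], idx[u]] = 1.0
        U0[idx[u], idx[u]] += 1.0
        U0[idx[v], idx[v]] += 1.0
    U0[np.diag_indices_from(U0)] += delta
    d = np.sqrt(np.diag(U0))
    return U0 / np.outer(d, d)
```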
3.3 Algorithm
Recall that our model is a Gaussian process classifier with error terms $\epsilon_i$ of variance $\sigma^2$ such that $\epsilon_i = \zeta_i + \xi_i$. Without loss of generality, we will assume that $\sigma^2 = 1$. This results in the following parameterization of the full error covariance matrix:

$$\Sigma_\epsilon = (1 - \upsilon)\mathbf{I} + \upsilon\mathbf{U} \qquad (6)$$

where $\mathbf{I}$ is an $n \times n$ identity matrix. Matrix $(1 - \upsilon)\mathbf{I}$ corresponds to the covariance matrix $\Sigma_\xi$. The usefulness of separating $\epsilon$ into $\zeta$ and $\xi$ becomes evident when we use an expectation-propagation (EP) algorithm [7] to perform inference in our relational classifier. Instead of approximating the posterior of $\mathbf{f}$, we approximate the posterior density $P(\mathbf{g}|\mathcal{D})$, $\mathcal{D} = \{(x_1, y_1), \ldots, (x_n, y_n)\}$ being the given training data. The approximate posterior has the form $Q(\mathbf{g}) \propto P(\mathbf{g})\prod_i \tilde{t}_i(g_i)$, where $P(\mathbf{g})$ is the Gaussian process prior with kernel matrix $\mathbf{R} = \mathbf{K} + \Sigma_\zeta$ as defined in the previous section. Since the covariance matrix $\Sigma_\xi$ is diagonal, the true likelihood of $\mathbf{y}$ given $\mathbf{g}$ factorizes over each datapoint: $P(\mathbf{y}|\mathbf{g}) = \prod_{i=1}^n P(y_i|g_i)$, and standard EP algorithms for Gaussian process classification can be used [8] (with the variance given by $\Sigma_\xi$ instead of $\Sigma_\epsilon$, and kernel matrix $\mathbf{R}$ instead of $\mathbf{K}$).
The final algorithm defines a whole new class of relational models, depends on a single hyperparameter $\upsilon$ which can be optimized by grid search in $[0, 1]$, and requires virtually no modification of code written for EP-based Gaussian process classifiers⁴.
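In pseudocode terms, the whole procedure reduces to building the modified kernel and noise terms and handing them to any off-the-shelf EP classifier. The `ep_log_marginal` callback below is a hypothetical stand-in for such a classifier's marginal-likelihood routine, not an API from any specific library; the grid search matches the one used in Section 4.

```python
import numpy as np

def xgp_inputs(X, U, upsilon):
    """Kernel matrix R = K + upsilon*U and per-point noise variances
    (1 - upsilon), following eq. (6) with sigma^2 = 1."""
    R = X @ X.T + upsilon * U          # linear kernel plus relational part
    noise_var = (1.0 - upsilon) * np.ones(X.shape[0])
    return R, noise_var

def select_upsilon(X, y, U, ep_log_marginal):
    """Grid search over upsilon in {0.1, ..., 1.0}, maximizing the
    approximate EP marginal likelihood returned by the assumed callback."""
    grid = [0.1 * k for k in range(1, 11)]
    return max(grid, key=lambda u: ep_log_marginal(*xgp_inputs(X, U, u), y))
```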
4 Results
We now compare three different methods in relational classification tasks. We will compare a standard Gaussian process classifier (GPC), the relational Gaussian process (RGP) of [2] and our method, the mixed graph Gaussian process (XGP). A linear kernel $K(x, z) = x \cdot z$ is used, as described by [2]. We set $\Delta = 10^{-4}$ and the hyperparameter $\upsilon$ is found by a grid search in the space $\{0.1, 0.2, 0.3, \ldots, 1.0\}$ maximizing the approximate EP marginal likelihood⁵.
⁴We provide MATLAB/Octave code for our method at http://www.statslab.cam.ac.uk/~silva.
⁵For triangulation, we used the MATLAB implementation of the Reverse Cuthill-McKee vertex ordering available at http://people.scs.fsu.edu/~burkardt/m_src/rcm/rcm.html
Table 1: The averaged AUC scores of citation prediction on test cases of the Cora database, recorded along with standard deviation over 100 trials. "n" denotes the number of papers in one class. "Citations" denotes the citation count within the two paper classes.
Group   n          Citations   GPC             GPC with Citations   XGP
5vs1    346/488    2466        0.905 ± 0.031   0.891 ± 0.022        0.945 ± 0.053
5vs2    346/619    3417        0.900 ± 0.032   0.905 ± 0.044        0.933 ± 0.059
5vs3    346/1376   3905        0.863 ± 0.040   0.893 ± 0.017        0.883 ± 0.013
5vs4    346/646    2858        0.916 ± 0.030   0.887 ± 0.018        0.951 ± 0.042
5vs6    346/281    1968        0.887 ± 0.054   0.843 ± 0.076        0.955 ± 0.041
5vs7    346/529    2948        0.869 ± 0.045   0.867 ± 0.041        0.926 ± 0.076
4.1 Political books
We consider first a simple classification problem where the goal is to classify whether a particular book is of liberal political inclination or not. The features of each book are given by the words in the Amazon.com front page for that particular book. The choice of books, labels, and relationships are given in the data collected by Valdis Krebs and available at http://www-personal.umich.edu/~mejn/netdata. The data containing book features can be found at http://www.statslab.cam.ac.uk/~silva. There are 105 books, 43 of which are labeled as liberal books. The relationships are pairs of books which are frequently purchased together by the same customer. Notice this is an easy problem, where labels are strongly associated if they share a relationship. We performed evaluation by sampling 100 times from the original pool of books, assigning half of them as training data. The evaluation criterion was the area under the curve (AUC) for this binary problem. This is a problem where Method I is suboptimal. Figure 4(a) shows the original binary link matrix. Figure 4(b) depicts the corresponding $\mathbf{U}^0$ matrix obtained with Method I, where entries closer to red correspond to stronger correlations. Method II gives a better performance here (Method I was better in the next two experiments). The AUC result for GPC was 0.92, while both RGP and XGP achieved 0.98 (the difference between XGP and GPC having a standard deviation of 0.02).
4.2 Cora
The Cora collection [6] contains over 50,000 computer science research papers including bibliographic citations. We used a subset in our experiment. The subset consists of 4,285 machine learning
papers categorized into 7 classes. The second column of Table 1 shows the class sizes. Each paper
was preprocessed as a bag-of-words, a vector of "term frequency" components scaled by "inverse document frequency", and then normalized to unity length. This follows the pre-processing used in
[2]. There is a total of 20,082 features. For each class, we randomly selected 1% of the labelled
samples for training and tested on the remainder. The partition was repeated 100 times. We used
the fact that the database is composed of fairly specialized papers as an illustration of when XGP
might not be as optimal as RGP (whose AUC curves are very close to 1), since the population of
links tends to be better separated between different classes (but this also means that the task is fairly easy, and differences disappear very rapidly with increasing sample sizes). The fact that there is
very little training data also favors RGP, since XGP propagates information through training points.
Still, XGP does better than the non-relational GPC. Notice that adding the citation adjacency matrix
as a binary input feature for each paper does not improve the performance of the GPC, as shown in
Table 1. Results for other classes are of similar qualitative nature and not displayed here.
4.3 WebKB
The WebKB dataset consists of homepages from 4 different universities: Cornell, Texas, Washington
and Wisconsin [3]. Each webpage belongs to one out of 7 categories: student, professor, course,
project, staff, department and "other". The relations come from actual links in the webpages. There is relatively high heterogeneity of types of links in each page: in terms of mixed graph modeling, this linkage mechanism is explained by a hidden common cause (e.g., a student and a course page are associated because that person's interest in enrolling as a student also creates demand for a
course). The heterogeneity also suggests that two unlinked pages should not, on average, have an
association if they link to a common page W . However, observing the type of page W might create
Table 2: Comparison of the three algorithms on the task "other" vs. "not-other" in the WebKB domain. Results for GPC and RGP taken from [2]. The same partitions for training and test are used to generate the results for XGP. Mean and standard deviation of AUC results are reported.
University   # Other   # All   # Links   GPC             RGP             XGP
Cornell      617       865     13177     0.708 ± 0.021   0.884 ± 0.025   0.917 ± 0.022
Texas        571       827     16090     0.799 ± 0.021   0.906 ± 0.026   0.949 ± 0.015
Washington   939       1205    15388     0.782 ± 0.023   0.877 ± 0.024   0.923 ± 0.016
Wisconsin    942       1263    21594     0.839 ± 0.014   0.899 ± 0.015   0.941 ± 0.018
the association. We compare how the three algorithms perform when trying to predict if a webpage is of class "other" or not (the other classifications are easier, with smaller differences; results are omitted for space purposes). The proportion of "other" to non-"other" is about 4:1, which makes the
area under the curve (AUC) a more suitable measure of success. We used the same 100 subsamples
from [2], where 10% of the whole data is sampled from the pool for a specific university, and the
remaining is used for test. We also used the same features as in [2], pre-processed as described in the
previous section. The results are shown in Table 2. Both relational Gaussian processes are far better
than the non-relational GPC. XGP gives significant improvements over RGP in all four universities.
5 Conclusion
We introduced a new family of relational classifiers by extending a classical statistical model [12]
to non-parametric relational classification. This is inspired by recent advances in relational Gaussian processes [2] and Bayesian inference for mixed graph models [11]. We showed empirically
that modeling the type of latent phenomena that our approach postulates can sometimes improve
prediction performance in problems traditionally approached by Markov network structures.
Several interesting problems can be treated in the future. It is clear that there are many different ways
by which the relational covariance matrix can be parameterized. Intermediate solutions between
Methods I and II, approximations through matrix factorizations and graph cuts are only a few among
many alternatives that can be explored. Moreover, there is a relationship between our model and
multiple kernel learning [1], where one of the kernels comes from error covariances. This might
provide alternative ways of learning our models, including multiple types of relationships.
Acknowledgements: We thank Vikas Sindhwani for the preprocessed Cora database.
References
[1] F. Bach, G. Lanckriet, and M. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm.
21st International Conference on Machine Learning, 2004.
[2] W. Chu, V. Sindhwani, Z. Ghahramani, and S. Keerthi. Relational learning with Gaussian processes.
Neural Information Processing Systems, 2006.
[3] M. Craven, D. DiPasquo, D. Freitag, A. McCallum, T. Mitchell, K. Nigam, and S. Slattery. Learning to
extract symbolic knowledge from the World Wide Web. Proceedings of AAAI'98, pages 509–516, 1998.
[4] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting
and labeling sequence data. 18th International Conference on Machine Learning, 2001.
[5] S. Lauritzen. Graphical Models. Oxford University Press, 1996.
[6] A. McCallum, K. Nigam, J. Rennie, and K. Seymore. Automating the construction of Internet portals
with machine learning. Information Retrieval Journal, 3:127–163, 2000.
[7] T. Minka. A family of algorithms for approximate Bayesian inference. PhD Thesis, MIT, 2001.
[8] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[9] T. Richardson and P. Spirtes. Ancestral graph Markov models. Annals of Statistics, 30:962–1030, 2002.
[10] P. Sen and L. Getoor. Link-based classification. Report CS-TR-4858, University of Maryland, 2007.
[11] R. Silva and Z. Ghahramani. Bayesian inference for Gaussian mixed graph models. UAI, 2006.
[12] A. Zellner. An efficient method of estimating seemingly unrelated regression equations and tests for
aggregation bias. Journal of the American Statistical Association, 1962.
2,510 | 3,277 | Catching Up Faster in Bayesian Model Selection and Model Averaging
Tim van Erven, Peter Grünwald, Steven de Rooij
Centrum voor Wiskunde en Informatica (CWI)
Kruislaan 413, P.O. Box 94079
1090 GB Amsterdam, The Netherlands
{Tim.van.Erven,Peter.Grunwald,Steven.de.Rooij}@cwi.nl
Abstract
Bayesian model averaging, model selection and their approximations such as BIC
are generally statistically consistent, but sometimes achieve slower rates of convergence than other methods such as AIC and leave-one-out cross-validation. On
the other hand, these other methods can be inconsistent. We identify the catch-up
phenomenon as a novel explanation for the slow convergence of Bayesian methods. Based on this analysis we define the switch-distribution, a modification of the
Bayesian model averaging distribution. We prove that in many situations model
selection and prediction based on the switch-distribution is both consistent and
achieves optimal convergence rates, thereby resolving the AIC-BIC dilemma. The
method is practical; we give an efficient algorithm.
1 Introduction
We consider inference based on a countable set of models (sets of probability distributions), focusing
on two tasks: model selection and model averaging. In model selection tasks, the goal is to select
the model that best explains the given data. In model averaging, the goal is to find the weighted
combination of models that leads to the best prediction of future data from the same source.
An attractive property of some criteria for model selection is that they are consistent under weak
conditions, i.e. if the true distribution $P^*$ is in one of the models, then the $P^*$-probability that this
model is selected goes to one as the sample size increases. BIC [14], Bayes factor model selection
[8], Minimum Description Length (MDL) model selection [3] and prequential model validation [5]
are examples of widely used model selection criteria that are usually consistent. However, other
model selection criteria such as AIC [1] and leave-one-out cross-validation (LOO) [16], while often inconsistent, do typically yield better predictions. This is especially the case in nonparametric
settings, where $P^*$ can be arbitrarily well-approximated by a sequence of distributions in the (parametric) models under consideration, but is not itself contained in any of these. In many such cases,
the predictive distribution converges to the true distribution at the optimal rate for AIC and LOO
[15, 9], whereas in general BIC, the Bayes factor method and prequential validation only achieve
the optimal rate to within an O(log n) factor [13, 20, 6]. In this paper we reconcile these seemingly
conflicting approaches [19] by improving the rate of convergence achieved in Bayesian model selection without losing its convergence properties. First we provide an example to show why Bayes
sometimes converges too slowly.
Given priors on models $\mathcal{M}_1, \mathcal{M}_2, \ldots$ and parameters therein, Bayesian inference associates each
model $\mathcal{M}_k$ with the marginal distribution $p_k$, given in (1), obtained by averaging over the parameters
according to the prior. In model selection the preferred model is the one with maximum a posteriori
probability. By Bayes' rule this is $\arg\max_k p_k(x^n) w(k)$, where $w(k)$ denotes the prior probability
of $\mathcal{M}_k$. We can further average over model indices, a process called Bayesian Model Averaging
(BMA). The resulting distribution $p_{\mathrm{bma}}(x^n) = \sum_k p_k(x^n) w(k)$ can be used for prediction. In a
sequential setting, the probability of a data sequence $x^n := x_1, \ldots, x_n$ under a distribution $p$ typically
decreases exponentially fast in $n$. It is therefore common to consider $-\log p(x^n)$, which we call the
codelength of $x^n$ achieved by $p$. We take all logarithms to base 2, allowing us to measure codelength
in bits. The name codelength refers to the correspondence between codelength functions and probability distributions based on the Kraft inequality, but one may also think of the codelength as the
accumulated log loss that is incurred if we sequentially predict the $x_i$ by conditioning on the past,
i.e. using $p(\cdot \mid x^{i-1})$ [3, 6, 5, 11]. For BMA, we have $-\log p_{\mathrm{bma}}(x^n) = \sum_{i=1}^{n} -\log p_{\mathrm{bma}}(x_i \mid x^{i-1})$.
Here the $i$th term represents the loss incurred when predicting $x_i$ given $x^{i-1}$ using $p_{\mathrm{bma}}(\cdot \mid x^{i-1})$,
which turns out to be equal to the posterior average: $p_{\mathrm{bma}}(x_i \mid x^{i-1}) = \sum_k p_k(x_i \mid x^{i-1}) w(k \mid x^{i-1})$.
Prediction using $p_{\mathrm{bma}}$ has the advantage that the codelength it achieves on $x^n$ is close to the codelength of $p_{\hat{k}}$, where $\hat{k}$ is the index of the best of the marginals $p_1, p_2, \ldots$ Namely, given a prior $w$ on
model indices, the difference between $-\log p_{\mathrm{bma}}(x^n) = -\log(\sum_k p_k(x^n) w(k))$ and $-\log p_{\hat{k}}(x^n)$
must be in the range $[0, -\log w(\hat{k})]$, whatever data $x^n$ are observed. Thus, using BMA for prediction is sensible if we are satisfied with doing essentially as well as the best model under consideration. However, it is often possible to combine $p_1, p_2, \ldots$ into a distribution that achieves
smaller codelength than $p_{\hat{k}}$! This is possible if the index $\hat{k}$ of the best distribution changes with
the sample size in a predictable way. This is common in model selection, for example with nested
models, say $\mathcal{M}_1 \subset \mathcal{M}_2$. In this case $p_1$ typically predicts better at small sample sizes (roughly,
because $\mathcal{M}_2$ has more parameters that need to be learned than $\mathcal{M}_1$), while $p_2$ predicts better
eventually. Figure 1 illustrates this phenomenon. It shows the accumulated codelength difference
$-\log p_2(x^n) - (-\log p_1(x^n))$ on "The Picture of Dorian Gray" by Oscar Wilde, where $p_1$ and $p_2$
are the Bayesian marginal distributions for the first-order and second-order Markov chains, respectively, and each character in the book is an outcome. Note that the example models $\mathcal{M}_1$ and $\mathcal{M}_2$ are
very crude; for this particular application much better models are available. In more complicated,
more realistic model selection scenarios, the models may still be wrong, but it may not be known
how to improve them. Thus $\mathcal{M}_1$ and $\mathcal{M}_2$ serve as a simple illustration only. We used uniform priors
on the model parameters, but for other common priors similar behaviour can be expected. Clearly
$p_1$ is better for about the first 100 000 outcomes, gaining a head start of approximately 40 000 bits.
Ideally we should predict the initial 100 000 outcomes using $p_1$ and the rest using $p_2$. However, $p_{\mathrm{bma}}$
only starts to behave like $p_2$ when it catches up with $p_1$ at a sample size of about 310 000, when the
codelength of $p_2$ drops below that of $p_1$. Thus, in the shaded area $p_{\mathrm{bma}}$ behaves like $p_1$ while $p_2$ is
making better predictions of those outcomes: since at $n = 100\,000$, $p_2$ is 40 000 bits behind, and at
$n = 310\,000$, it has caught up, in between it must have outperformed $p_1$ by 40 000 bits!
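The $[0, -\log w(\hat{k})]$ bound is easy to check numerically. The following Python sketch is our own illustration, not code from the paper; the two models, the simulated data and the prior are hypothetical choices. It accumulates the sequential codelength of two Bernoulli marginals and verifies that BMA stays within $-\log_2 w(\hat{k})$ bits of the better one.

import numpy as np

def bernoulli_marginal_loss(xs, a, b):
    """Codelength in bits of the Bayes marginal under a Beta(a, b) prior,
    accumulated by sequentially predicting each outcome from the past."""
    loss, ones = 0.0, 0
    for n, x in enumerate(xs):
        p1 = (ones + a) / (n + a + b)          # posterior predictive P(x = 1)
        loss += -np.log2(p1 if x == 1 else 1.0 - p1)
        ones += x
    return loss

rng = np.random.default_rng(0)
xs = rng.binomial(1, 0.7, size=2000)

loss_p1 = -np.log2(0.5) * len(xs)                # model 1: theta fixed at 1/2
loss_p2 = bernoulli_marginal_loss(xs, 1.0, 1.0)  # model 2: uniform prior on theta
losses = np.array([loss_p1, loss_p2])
w = np.array([0.5, 0.5])                         # prior on the model index

# Codelength of BMA: -log2 sum_k w(k) p_k(x^n), computed stably in bits.
loss_bma = -np.logaddexp2(np.log2(w[0]) - losses[0], np.log2(w[1]) - losses[1])

gap = loss_bma - losses.min()
print(f"BMA is {gap:.4f} bits behind the best marginal")
assert 0.0 <= gap <= -np.log2(0.5) + 1e-9        # gap lies in [0, -log2 w(k_hat)]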
[Figure 1: The Catch-up Phenomenon. The plot shows the codelength difference with the first-order
Markov model, in bits (vertical axis, roughly $-100\,000$ to $60\,000$), against sample size (0 to 450 000),
for the second-order Markov model, Bayesian Model Averaging, and the Switch-Distribution.]
The general pattern that first one model is better and then another occurs widely, both on real-world
data and in theoretical settings. We argue that failure to take this effect into account leads to the
suboptimal rate of convergence achieved by Bayes factor model selection and related methods. We
have developed an alternative method to combine distributions $p_1$ and $p_2$ into a single distribution
$p_{\mathrm{sw}}$, which we call the switch-distribution, defined in Section 2. Figure 1 shows that $p_{\mathrm{sw}}$ behaves
like $p_1$ initially, but in contrast to $p_{\mathrm{bma}}$ it starts to mimic $p_2$ almost immediately after $p_2$ starts
making better predictions; it essentially does this no matter what sequence $x^n$ is actually observed.
$p_{\mathrm{sw}}$ differs from $p_{\mathrm{bma}}$ in that it
is based on a prior distribution on sequences of models rather than simply a prior distribution on
models. This allows us to avoid the implicit assumption that there is one model which is best at
all sample sizes. After conditioning on past observations, the posterior we obtain gives a better
indication of which model performs best at the current sample size, thereby achieving a faster rate
of convergence. Indeed, the switch-distribution is related to earlier algorithms for tracking the best
expert developed in the universal prediction literature [7, 18, 17, 10]; however, the applications we
have in mind and the theorems we prove are completely different. In Sections 3 and 4 we show
that model selection based on the switch-distribution is consistent (Theorem 1), but unlike standard
Bayes factor model selection achieves optimal rates of convergence (Theorem 2). Proofs of the
theorems are in Appendix A. In Section 5 we give a practical algorithm that computes the switch-distribution for $K$ (rather than 2) predictors in $\Theta(n \cdot K)$ time. In the full paper, we will give further
details of the proof of Theorem 1 and a more detailed discussion of Theorem 2 and the implications
of both theorems.
2 The Switch-Distribution for Model Selection and Prediction
Preliminaries Suppose $X^\infty = (X_1, X_2, \ldots)$ is a sequence of random variables that take values
in sample space $\mathcal{X} \subseteq \mathbb{R}^d$ for some $d \in \mathbb{Z}^+ = \{1, 2, \ldots\}$. For $n \in \mathbb{N} = \{0, 1, 2, \ldots\}$, let $x^n = (x_1, \ldots, x_n)$
denote the first $n$ outcomes of $X^\infty$, such that $x^n$ takes values in the product space
$\mathcal{X}^n = \mathcal{X}_1 \times \cdots \times \mathcal{X}_n$. (We let $x^0$ denote the empty sequence.) Let $\mathcal{X}^* = \bigcup_{n=0}^{\infty} \mathcal{X}^n$. For $m > n$, we write
$X_{n+1}^m$ for $(X_{n+1}, \ldots, X_m)$, where $m = \infty$ is allowed and we omit the subscript when $n = 0$.
Any distribution $P(X^\infty)$ may be defined by a sequential prediction strategy $p$ that predicts the
next outcome at any time $n \in \mathbb{N}$. To be precise: Given the previous outcomes $x^n$ at time $n$, this
prediction strategy should issue a conditional density $p(X_{n+1} \mid x^n)$ with corresponding distribution
$P(X_{n+1} \mid x^n)$ for the next outcome $X_{n+1}$. Such sequential prediction strategies are sometimes called
prequential forecasting systems [5]. An instance is given in Example 1 below. We assume that the
density $p(X_{n+1} \mid x^n)$ is taken relative to either the usual Lebesgue measure (if $\mathcal{X}$ is continuous)
or the counting measure (if $\mathcal{X}$ is countable). In the latter case $p(X_{n+1} \mid x^n)$ is a probability mass
function. It is natural to define the joint density $p(x^m \mid x^n) = p(x_{n+1} \mid x^n) \cdots p(x_m \mid x^{m-1})$ and let
$P(X_{n+1}^\infty \mid x^n)$ be the unique distribution such that, for all $m > n$, $p(X_{n+1}^m \mid x^n)$ is the density of its
marginal distribution for $X_{n+1}^m$. To ensure that $P(X_{n+1}^\infty \mid x^n)$ is well-defined even if $\mathcal{X}$ is continuous,
we impose the natural requirement that for any $k \in \mathbb{Z}^+$ and any fixed event $A_{k+1} \subseteq \mathcal{X}^{k+1}$ the
probability $P(A_{k+1} \mid x^k)$ is a measurable function of $x^k$, which holds automatically if $\mathcal{X}$ is countable.
Model Selection and Prediction The goal in model selection is to choose an explanation for
observed data $x^n$ from a potentially infinite list of candidate models $\mathcal{M}_1, \mathcal{M}_2, \ldots$ We consider
parametric models, which are sets $\{p_\theta : \theta \in \Theta\}$ of prediction strategies $p_\theta$ that are indexed by elements of $\Theta \subseteq \mathbb{R}^d$, for some smallest possible $d \in \mathbb{N}$, the number of degrees of freedom. Examples
of model selection are regression based on a set of basis functions such as polynomials ($d$ is the
number of coefficients of the polynomial), the variable selection problem in regression [15, 9, 20]
($d$ is the number of variables), and histogram density estimation [13] ($d$ is the number of bins). A
model selection criterion is a function $\delta : \mathcal{X}^* \to \mathbb{Z}^+$ that, given any data sequence $x^n \in \mathcal{X}^*$, selects
the model $\mathcal{M}_k$ with index $k = \delta(x^n)$.
We associate each model $\mathcal{M}_k$ with a single prediction strategy $\bar{p}_k$. The bar emphasizes that $\bar{p}_k$ is a
meta-strategy based on the prediction strategies in $\mathcal{M}_k$. In many approaches to model selection, for
example AIC and LOO, $\bar{p}_k$ is defined using some estimator $\hat{\theta}_k$ for each model $\mathcal{M}_k$, which maps a
sequence $x^n$ of previous observations to an estimated parameter value that represents a "best guess"
of the true/best distribution in the model. Prediction is then based on this estimator: $\bar{p}_k(X_{n+1} \mid x^n) = p_{\hat{\theta}_k(x^n)}(X_{n+1} \mid x^n)$, which also defines a joint density $\bar{p}_k(x^n) = \bar{p}_k(x_1) \cdots \bar{p}_k(x_n \mid x^{n-1})$.
The Bayesian approach to model selection or model averaging goes the other way around. We start
out with a prior $w$ on $\Theta_k$, and define the Bayesian marginal density
$$\bar{p}_k(x^n) = \int_{\theta \in \Theta_k} p_\theta(x^n)\, w(\theta)\, d\theta. \tag{1}$$
When $\bar{p}_k(x^n)$ is non-zero this joint density induces a unique conditional density $\bar{p}_k(X_{n+1} \mid x^n) =
\bar{p}_k(X_{n+1}, x^n)/\bar{p}_k(x^n)$, which is equal to the mixture of $p_\theta \in \mathcal{M}_k$ according to the posterior,
$w(\theta \mid x^n) = p_\theta(x^n) w(\theta) / \int p_\theta(x^n) w(\theta)\, d\theta$, based on $x^n$. Thus the Bayesian approach also defines a prediction strategy $\bar{p}_k(X_{n+1} \mid x^n)$, whose corresponding distribution may be thought of as
an estimator. From now on we sometimes call the distributions induced by $\bar{p}_1, \bar{p}_2, \ldots$ "estimators",
even if they are Bayesian. This unified view is known as prequential or predictive MDL [11, 5].
Example 1. Suppose $\mathcal{X} = \{0, 1\}$. Then a prediction strategy $\bar{p}$ may be based on the Bernoulli
model $\mathcal{M} = \{p_\theta \mid \theta \in [0, 1]\}$ that regards $X^\infty$ as a sequence of independent, identically distributed
Bernoulli random variables with $P_\theta(X_{n+1} = 1) = \theta$. We may predict $X_{n+1}$ using the maximum
likelihood (ML) estimator based on the past, i.e. using $\hat{\theta}(x^n) = n^{-1} \sum_{i=1}^{n} x_i$. The prediction for
$x_1$ is then undefined. If we use a smoothed ML estimator such as the Laplace estimator, $\hat{\theta}'(x^n) =
(n + 2)^{-1}(\sum_{i=1}^{n} x_i + 1)$, then all predictions are well-defined. Perhaps surprisingly, the predictor
$\bar{p}'$ defined by $\bar{p}'(X_{n+1} \mid x^n) = p_{\hat{\theta}'(x^n)}(X_{n+1})$ equals the Bayesian predictive distribution based on
a uniform prior. Thus in this case a Bayesian predictor and an estimation-based predictor coincide!
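This coincidence is easy to check numerically. The sketch below is our own illustration (not from the paper): it compares the Laplace-estimator prediction with the uniform-prior posterior predictive obtained by numerical integration.

import numpy as np

def integrate(fx, x):
    # simple trapezoidal rule, kept explicit for portability
    return float(np.sum(0.5 * (fx[1:] + fx[:-1]) * np.diff(x)))

rng = np.random.default_rng(1)
xs = rng.binomial(1, 0.3, size=30)
grid = np.linspace(1e-6, 1 - 1e-6, 200001)

ones = 0
for n, x in enumerate(xs):
    theta_laplace = (ones + 1) / (n + 2)             # smoothed ML estimate
    post = grid**ones * (1 - grid)**(n - ones)       # posterior ~ likelihood
    post /= integrate(post, grid)                    # (uniform prior on theta)
    theta_bayes = integrate(grid * post, grid)       # P(X_{n+1} = 1 | x^n)
    assert abs(theta_laplace - theta_bayes) < 1e-4
    ones += x
print("Laplace estimator equals the uniform-prior Bayes predictive.")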
The Switch-Distribution Suppose $p_1, p_2, \ldots$ is a list of prediction strategies for $\mathcal{X}^*$. (Although
here the list is infinitely long, the developments below can with little modification be adjusted to the
case where the list is finite.) We first define a family $Q = \{q_s : s \in \mathbb{S}\}$ of combinator prediction
strategies that switch between the original prediction strategies. Here the parameter space $\mathbb{S}$ is
defined as
$$\mathbb{S} = \{(t_1, k_1), \ldots, (t_m, k_m) \in (\mathbb{N} \times \mathbb{Z}^+)^m \mid m \in \mathbb{Z}^+,\ 0 = t_1 < \ldots < t_m\}. \tag{2}$$
The parameter $s \in \mathbb{S}$ specifies the identities of $m$ constituent prediction strategies and the sample
sizes, called switch-points, at which to switch between them. For $s = ((t'_1, k'_1), \ldots, (t'_{m'}, k'_{m'}))$, we
define $t_i(s) = t'_i$, $k_i(s) = k'_i$ and $m(s) = m'$. We omit the argument when the parameter $s$ is clear
from context, e.g. we write $t_3$ for $t_3(s)$. For each $s \in \mathbb{S}$ the corresponding $q_s \in Q$ is defined as:
$$q_s(X_{n+1} \mid x^n) = \begin{cases}
p_{k_1}(X_{n+1} \mid x^n) & \text{if } n < t_2, \\
p_{k_2}(X_{n+1} \mid x^n) & \text{if } t_2 \le n < t_3, \\
\quad\vdots & \quad\vdots \\
p_{k_{m-1}}(X_{n+1} \mid x^n) & \text{if } t_{m-1} \le n < t_m, \\
p_{k_m}(X_{n+1} \mid x^n) & \text{if } t_m \le n.
\end{cases} \tag{3}$$
Switching to the same predictor multiple times is allowed. The extra switch-point $t_1$ is included
to simplify notation; we always take $t_1 = 0$. Now the switch-distribution is defined as a Bayesian
mixture of the elements of $Q$ according to a prior $\pi$ on $\mathbb{S}$:
Definition 1 (Switch-Distribution). Let $\pi$ be a probability mass function on $\mathbb{S}$. Then the switch-distribution $P_{\mathrm{sw}}$ with prior $\pi$ is the distribution for $X^\infty$ such that, for any $n \in \mathbb{Z}^+$, the density of its
marginal distribution for $X^n$ is given by
$$p_{\mathrm{sw}}(x^n) = \sum_{s \in \mathbb{S}} q_s(x^n) \cdot \pi(s). \tag{4}$$
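To make Definition 1 concrete, the following toy sketch (our own illustration; the two constant experts and the finite truncation of $\mathbb{S}$ to at most one switch are hypothetical choices) evaluates $p_{\mathrm{sw}}(x^n)$ by brute-force enumeration of Eq. (4).

import numpy as np

def q_s(s, preds, xs):
    """Probability of xs under switch parameter s = ((t1,k1), ..., (tm,km)).
    preds[k][n] is expert k's prediction P(x_{n+1} = 1 | x^n)."""
    p = 1.0
    for n, x in enumerate(xs):
        k = max((ti, ki) for ti, ki in s if ti <= n)[1]  # active expert at time n
        p1 = preds[k][n]
        p *= p1 if x == 1 else 1.0 - p1
    return p

xs = [0, 0, 1, 1, 1]
N = len(xs)
preds = {0: [0.2] * N, 1: [0.8] * N}        # two fixed (hypothetical) experts

# A finite slice of S: no switch, or exactly one switch at t in {1, ..., N-1}.
S = [(((0, k),), 0.25) for k in (0, 1)]
S += [(((0, k), (t, 1 - k)), 0.25 / (N - 1)) for k in (0, 1) for t in range(1, N)]
assert abs(sum(w for _, w in S) - 1.0) < 1e-12   # pi is a probability mass function

p_sw = sum(w * q_s(s, preds, xs) for s, w in S)  # Eq. (4) by enumeration
print(p_sw)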
Although the switch-distribution provides a general way to combine prediction strategies, in this
paper it will only be applied to combine prediction strategies $\bar{p}_1, \bar{p}_2, \ldots$ that correspond to models.
In this case we may define a corresponding model selection criterion $\delta_{\mathrm{sw}}$. To this end, let $K_{n+1} :
\mathbb{S} \to \mathbb{Z}^+$ be a random variable that denotes the strategy/model that is used to predict $X_{n+1}$ given
past observations $x^n$. Formally, $K_{n+1}(s) = k_i(s)$ iff $t_i(s) \le n$ and either $i = m(s)$ or $n < t_{i+1}(s)$.
Algorithm 1, given in Section 5, efficiently computes the posterior distribution on $K_{n+1}$ given $x^n$:
$$\pi(K_{n+1} = k \mid x^n) = \frac{\sum_{\{s : K_{n+1}(s) = k\}} \pi(s)\, q_s(x^n)}{p_{\mathrm{sw}}(x^n)}, \tag{5}$$
which is defined whenever $p_{\mathrm{sw}}(x^n)$ is non-zero. We turn this into a model selection criterion
$\delta_{\mathrm{sw}}(x^n) = \arg\max_k \pi(K_{n+1} = k \mid x^n)$ that selects the model with maximum posterior probability.
3 Consistency
If one of the models, say with index $k^*$, is actually true, then it is natural to ask whether $\delta_{\mathrm{sw}}$ is
consistent, in the sense that it asymptotically selects $k^*$ with probability 1. Theorem 1 below states
that this is the case under certain conditions which are only slightly stronger than those required for
the consistency of standard Bayes factor model selection.
Bayes factor model selection is consistent if for all $k$, $k' \ne k$, $\bar{P}_k(X^\infty)$ and $\bar{P}_{k'}(X^\infty)$ are mutually
singular, that is, if there exists a measurable set $A \subseteq \mathcal{X}^\infty$ such that $\bar{P}_k(A) = 1$ and $\bar{P}_{k'}(A) = 0$ [3].
For example, this can usually be shown to hold if the models are nested and for each $k$, $\Theta_k$ is a subset
of $\Theta_{k+1}$ of $w_{k+1}$-measure 0 [6]. For consistency of $\delta_{\mathrm{sw}}$, we need to strengthen this to the requirement
that, for all $k' \ne k$ and all $x^n \in \mathcal{X}^*$, the distributions $\bar{P}_k(X_{n+1}^\infty \mid x^n)$ and $\bar{P}_{k'}(X_{n+1}^\infty \mid x^n)$ are
mutually singular. For example, if $X_1, X_2, \ldots$ are i.i.d. according to each $P_\theta$ in all models, but also
if $\mathcal{X}$ is countable and $\bar{p}_k(x_{n+1} \mid x^n) > 0$ for all $k$, all $x^{n+1} \in \mathcal{X}^{n+1}$, then this conditional mutual
singularity is automatically implied by ordinary mutual singularity of $\bar{P}_k(X^\infty)$ and $\bar{P}_{k'}(X^\infty)$.
Let $\mathbb{E}_s = \{s' \in \mathbb{S} \mid m(s') > m(s),\ (t_i(s'), k_i(s')) = (t_i(s), k_i(s)) \text{ for } i = 1, \ldots, m(s)\}$ denote
the set of all possible extensions of $s$ to more switch-points. Let $\bar{p}_1, \bar{p}_2, \ldots$ be Bayesian prediction
strategies with respective parameter spaces $\Theta_1, \Theta_2, \ldots$ and priors $w_1, w_2, \ldots$, and let $\pi$ be the prior
of the corresponding switch-distribution.
Theorem 1 (Consistency of the Switch-Distribution). Suppose $\pi$ is positive everywhere on $\{s \in
\mathbb{S} \mid m(s) = 1\}$ and is such that there exists a positive constant $c$ such that, for every $s \in \mathbb{S}$,
$c \cdot \pi(s) \ge \pi(\mathbb{E}_s)$. Suppose further that $\bar{P}_k(X_{n+1}^\infty \mid x^n)$ and $\bar{P}_{k'}(X_{n+1}^\infty \mid x^n)$ are mutually singular
for all $k, k' \in \mathbb{Z}^+$, $k \ne k'$, $x^n \in \mathcal{X}^*$. Then, for all $k^* \in \mathbb{Z}^+$, for all $\theta^* \in \Theta_{k^*}$ except for a subset of
$\Theta_{k^*}$ of $w_{k^*}$-measure 0, the posterior distribution on $K_{n+1}$ satisfies
$$\pi(K_{n+1} = k^* \mid X^n) \xrightarrow{\ n \to \infty\ } 1 \quad \text{with } P_{\theta^*}\text{-probability 1.} \tag{6}$$
The requirement that $c \cdot \pi(s) \ge \pi(\mathbb{E}_s)$ is automatically satisfied if $\pi$ is of the form:
$$\pi(s) = \pi_M(m)\, \pi_K(k_1) \prod_{i=2}^{m} \pi_T(t_i \mid t_i > t_{i-1})\, \pi_K(k_i), \tag{7}$$
where $\pi_M$, $\pi_K$ and $\pi_T$ are priors on $\mathbb{Z}^+$ with full support, and $\pi_M$ is geometric: $\pi_M(m) = \theta^{m-1}(1 - \theta)$
for some $0 \le \theta < 1$. In this case $c = \theta/(1 - \theta)$.
4 Optimal Risk Convergence Rates
Suppose $X_1, X_2, \ldots$ are distributed according to $P^*$. We define the risk at sample size $n \ge 1$ of the
estimator $\bar{P}$ relative to $P^*$ as
$$R_n(P^*, \bar{P}) = \mathbb{E}_{X^{n-1} \sim P^*}\big[ D\big(P^*(X_n = \cdot \mid X^{n-1}) \,\big\|\, \bar{P}(X_n = \cdot \mid X^{n-1})\big) \big],$$
where $D(\cdot \| \cdot)$ is the Kullback-Leibler (KL) divergence [4]. This is the standard definition of risk
relative to KL divergence. The risk is always well-defined, and equal to 0 if $\bar{P}(X_{n+1} \mid X^n)$ is
equal to $P^*(X_{n+1} \mid X^n)$. The following identity connects information-theoretic redundancy and
accumulated statistical risk (see [4] or [6, Chapter 15]): If $P^*$ admits a density $p^*$, then for all
prediction strategies $\bar{p}$,
$$\mathbb{E}_{X^n \sim P^*}\big[-\log \bar{p}(X^n) + \log p^*(X^n)\big] = \sum_{i=1}^{n} R_i(P^*, \bar{P}). \tag{8}$$
For a union of parametric models $\mathcal{M} = \bigcup_{k \ge 1} \mathcal{M}_k$, we define the information closure $\langle\mathcal{M}\rangle =
\{P^* \mid \inf_{P \in \mathcal{M}} D(P^* \| P) = 0\}$, i.e. the set of distributions for $X^\infty$ that can be arbitrarily well
approximated by elements of $\mathcal{M}$. Theorem 2 below shows that, for a very large class of $P^* \in \langle\mathcal{M}\rangle$,
the switch-distribution defined relative to estimators $\bar{P}_1, \bar{P}_2, \ldots$ achieves the same risk as any other
model selection criterion defined with respect to the same estimators, up to lower order terms; in
other words, model averaging based on the switch-distribution achieves at least the same rate of
convergence as model selection based on any model selection criterion whatsoever (the issue of
averaging vs selection will be discussed at length in the full paper). The theorem requires that the
prior $\pi$ in (4) is of the form (7), and satisfies
$$-\log \pi_M(m) = O(m); \quad -\log \pi_K(k) = O(\log k); \quad -\log \pi_T(t) = O(\log t). \tag{9}$$
Thus $\pi_M$, the prior on the total number of switch-points, is allowed to decrease either polynomially
or exponentially (as required for Theorem 1); $\pi_T$ and $\pi_K$ must decrease polynomially. For example,
we could set $\pi_T(t) = \pi_K(t) = 1/(t(t+1))$, or we could take the universal prior on the integers [12].
Let $\mathcal{M}^* \subseteq \langle\mathcal{M}\rangle$ be some subset of interest of the information closure of model $\mathcal{M}$. $\mathcal{M}^*$ may consist
of just a single, arbitrary distribution $P^*$ in $\langle\mathcal{M}\rangle \setminus \mathcal{M}$ (in that case Theorem 2 shows that the switch-distribution converges as fast as any other model selection criterion on any distribution in $\langle\mathcal{M}\rangle$ that
cannot be expressed parametrically relative to $\mathcal{M}$), or it may be a large, nonparametric family. In
that case, Theorem 2 shows that the switch-distribution achieves the minimax convergence rate. For
example, if the models $\mathcal{M}_k$ are $k$-bin histograms [13], then $\langle\mathcal{M}\rangle$ contains every distribution on
$[0, 1]$ with bounded continuous densities, and we may, for example, take $\mathcal{M}^*$ to be the set of all
distributions on $[0, 1]$ which have a differentiable density $p^*$ such that $p^*(x)$ and $(d/dx)p^*(x)$ are
bounded from below and above by some positive constants.
We restrict ourselves to model selection criteria which, at sample size $n$, never select a model $\mathcal{M}_k$
with $k > n^\tau$ for some arbitrarily large but fixed $\tau > 0$; note that this condition will be met for most
practical model selection criteria. Let $h : \mathbb{Z}^+ \to \mathbb{R}^+$ denote the minimax optimal achievable risk as
a function of the sample size, i.e.
$$h(n) = \inf_{\delta : \mathcal{X}^n \to \{1, 2, \ldots, \lceil n^\tau \rceil\}} \ \sup_{P^* \in \mathcal{M}^*} \ \sup_{n' \ge n} R_{n'}(P^*, \bar{P}_\delta), \tag{10}$$
where the infimum is over all model selection criteria restricted to sample size $n$, and $\lceil\cdot\rceil$ denotes
rounding up to the nearest integer. $\bar{p}_\delta$ is the prediction strategy satisfying, for all $n' \ge n$, all
$x^{n'} \in \mathcal{X}^{n'}$, $\bar{p}_\delta(X_{n'+1} \mid x^{n'}) := \bar{p}_{\delta(x^n)}(X_{n'+1} \mid x^{n'})$, i.e. at sample size $n$ it predicts $x_{n+1}$ using
$\bar{p}_k$ for the $k = \delta(X^n)$ chosen by $\delta$, and it keeps predicting future $x_{n'+1}$ by this $k$. We call $h(n)$
the minimax optimal rate of convergence for model selection relative to data from $\mathcal{M}^*$, model list
$\mathcal{M}_1, \mathcal{M}_2, \ldots$, and estimators $\bar{P}_1, \bar{P}_2, \ldots$ The definition is slightly nonstandard, in that we require a
second supremum over $n' \ge n$. This is needed because, as will be discussed in the full paper, it can
sometimes happen that, for some $P^*$, some $k$, some $n' > n$, $R_{n'}(P^*, \bar{P}_k) > R_n(P^*, \bar{P}_k)$ (see also
[4, Section 7.1]). In cases where this cannot happen, such as regression with standard ML estimators,
and in cases where, uniformly for all $k$, $\sup_{n' \ge n} R_{n'}(P^*, \bar{P}_k) - R_n(P^*, \bar{P}_k) = o(\sum_{i=1}^{n} h(i))$ (in the
full paper we show that this holds for, for example, histogram density estimation), our Theorem 2
also implies minimax convergence in terms of the standard definition, without the $\sup_{n' \ge n}$. We
expect that the $\sup_{n' \ge n}$ can be safely ignored for most "reasonable" models and estimators.
Theorem 2. Define $P_{\mathrm{sw}}$ for some model class $\mathcal{M} = \bigcup_{k \ge 1} \mathcal{M}_k$ as in (4), where the prior $\pi$ satisfies (9). Let $\mathcal{M}^*$ be a subset of $\langle\mathcal{M}\rangle$ with minimax rate $h$ such that $nh(n)$ is increasing, and
$nh(n)/(\log n)^2 \to \infty$. Then
$$\limsup_{n \to \infty} \frac{\sup_{P^* \in \mathcal{M}^*} \sum_{i=1}^{n} R_i(P^*, P_{\mathrm{sw}})}{\sum_{i=1}^{n} h(i)} \le 1. \tag{11}$$
The requirement that $nh(n)/(\log n)^2 \to \infty$ will typically be satisfied whenever $\mathcal{M}^* \setminus \mathcal{M}$ is
nonempty. Then $\mathcal{M}^*$ contains $P^*$ that are "nonparametric" relative to the chosen sequence of models $\mathcal{M}_1, \mathcal{M}_2, \ldots$ Thus, the problem should not be "too simple": we do not know whether (11) holds
in the parametric setting where $P^* \in \mathcal{M}_k$ for some $k$ on the list. Theorem 2 expresses that the
accumulated risk of the switch-distribution, as $n$ increases, is not significantly larger than the accumulated risk of any other procedure. This "convergence in sum" has been considered before by,
for example, [13, 4], and is compared to ordinary convergence in the full paper, where we will also
give example applications of the theorem and further discuss (10). The proof works by bounding
the redundancy of the switch-distribution, which, by (8), is identical to the accumulated risk. It is
not clear whether similar techniques can be used to bound the individual risk.
5 Computing the Switch-Distribution
Algorithm 1 sequentially computes the posterior probability on predictors $p_1, p_2, \ldots$. It requires that
$\pi$ is a prior of the form in (7), and $\pi_M$ is geometric, as is also required for Theorem 1 and permitted
in Theorem 2. The algorithm resembles FIXED-SHARE [7], but whereas FIXED-SHARE implicitly
imposes a geometric distribution for $\pi_T$, we allow general priors by varying the shared weight with
$n$. We do require slightly more space to cope with $\pi_M$.

Algorithm 1 SWITCH($x^N$)
  % K is the number of experts; θ is as in the definition of π_M.
  for k = 1, ..., K do initialise w_k^a ← θ · π_K(k); w_k^b ← (1 − θ) · π_K(k) od
  Report prior π(K_1 = k) = w_k^a + w_k^b for k = 1, ..., K (a K-sized array)
  for n = 1, ..., N do
    for k = 1, ..., K do w_k^a ← w_k^a · p_k(x_n | x^{n−1}); w_k^b ← w_k^b · p_k(x_n | x^{n−1}) od   (loss update)
    pool ← π_T(Z = n | Z ≥ n) · Σ_k w_k^a   (share update)
    for k = 1, ..., K do
      w_k^a ← w_k^a · π_T(Z ≠ n | Z ≥ n) + θ · pool · π_K(k)
      w_k^b ← w_k^b + (1 − θ) · pool · π_K(k)
    od
    Report posterior π(K_{n+1} = k | x^n) = (w_k^a + w_k^b) / Σ_{k'} (w_{k'}^a + w_{k'}^b) (a K-sized array)
  od
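A runnable version of Algorithm 1 might look as follows; this is our own reconstruction in Python, and the array layout, the example hazard function and the smoke-test data are our choices, not the authors' code. Here preds[n, k] holds $p_k(x_{n+1} \mid x^n)$ evaluated at the outcome that was actually observed.

import numpy as np

def switch_posteriors(preds, pi_K, hazard, theta=0.5):
    """Posterior pi(K_{n+1} = k | x^n) after each outcome, following Algorithm 1.

    preds:  (N, K) array; preds[n, k] = p_k(x_{n+1} | x^n) at the observed outcome.
    pi_K:   length-K prior over experts (sums to 1).
    hazard: function n -> pi_T(Z = n | Z >= n), the switching probability at time n.
    theta:  parameter of the geometric prior pi_M on the number of switch-points.
    """
    preds = np.asarray(preds, dtype=float)
    pik = np.asarray(pi_K, dtype=float)
    N, K = preds.shape
    wa = theta * pik          # weight of runs that may still switch ("a" weights)
    wb = (1.0 - theta) * pik  # weight of runs that never switch again ("b" weights)
    out = np.empty((N, K))
    for n in range(1, N + 1):
        wa = wa * preds[n - 1]                 # loss update
        wb = wb * preds[n - 1]
        h = hazard(n)
        pool = h * wa.sum()                    # share update
        wa = wa * (1.0 - h) + theta * pool * pik
        wb = wb + (1.0 - theta) * pool * pik
        out[n - 1] = (wa + wb) / (wa + wb).sum()   # pi(K_{n+1} = . | x^n)
    return out

# Smoke test. pi_T(t) = 1/(t(t+1)) has tail pi_T(Z >= t) = 1/t, so the hazard
# pi_T(Z = n | Z >= n) equals 1/(n+1).
rng = np.random.default_rng(0)
preds = rng.uniform(0.1, 0.9, size=(100, 3))   # hypothetical expert likelihoods
post = switch_posteriors(preds, np.full(3, 1 / 3), lambda n: 1.0 / (n + 1))
print(post[-1])

For long sequences the weights shrink exponentially, so a practical implementation should renormalize $w^a$ and $w^b$ jointly after each round (or work in log-space); the reported posteriors are unaffected by a common rescaling.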
This algorithm can be used to obtain fast convergence in the sense of Theorem 2, which can be
extended to cope with a restriction to only the first $K$ experts. Theorem 1 can be extended to show
consistency in this case as well. If $\pi_T(Z = n \mid Z \ge n)$ and $\pi_K(k)$ can be computed in constant time,
then the running time is $\Theta(N \cdot K)$, which is of the same order as that of fast model selection criteria
like AIC and BIC. We will explain this algorithm in more detail in a forthcoming publication.
Acknowledgements We thank Y. Mansour, whose remark over lunch at COLT 2005 sparked off
all this research. We thank P. Harremoës and W. Koolen for mathematical support. This work was
supported in part by the IST Programme of the European Community, under the PASCAL Network
of Excellence, IST-2002-506778. This publication only reflects the authors? views.
A Proofs
Proof of Theorem 1. Let $U_n = \{s \in \mathbb{S} \mid K_{n+1}(s) \ne k^*\}$ denote the set of "bad" parameters $s$ that
select an incorrect model. It is sufficient to show that
$$\lim_{n \to \infty} \frac{\sum_{s \in U_n} \pi(s)\, q_s(X^n)}{\sum_{s \in \mathbb{S}} \pi(s)\, q_s(X^n)} = 0 \quad \text{with } \bar{P}_{k^*}\text{-probability 1.} \tag{12}$$
To see this, suppose the theorem is false. Then there exists a $\Phi \subseteq \Theta_{k^*}$ with $w_{k^*}(\Phi) > 0$ such that
(6) does not hold for any $\theta^* \in \Phi$. But then by definition of $\bar{P}_{k^*}$ we have a contradiction with (12).
Now let $A = \{s \in \mathbb{S} : k_{m(s)}(s) \ne k^*\}$ denote the set of parameters that are bad for sufficiently large $n$.
We observe that for each $s' \in U_n$ there exists at least one element $s \in A$ that uses the same sequence
of switch-points and predictors on the first $n + 1$ outcomes (this implies that $K_i(s) = K_i(s')$ for
$i = 1, \ldots, n + 1$) and has no switch-points beyond $n$ (i.e. $t_{m(s)} \le n$). Consequently, either $s' = s$
or $s' \in \mathbb{E}_s$. Therefore
$$\sum_{s' \in U_n} \pi(s')\, q_{s'}(x^n) \le \sum_{s \in A} \big(\pi(s) + \pi(\mathbb{E}_s)\big)\, q_s(x^n) \le (1 + c) \sum_{s \in A} \pi(s)\, q_s(x^n). \tag{13}$$
Defining the mixture $r(x^n) = \sum_{s \in A} \pi(s)\, q_s(x^n)$, we will show that
$$\lim_{n \to \infty} \frac{r(X^n)}{\pi(s = (0, k^*)) \cdot \bar{p}_{k^*}(X^n)} = 0 \quad \text{with } \bar{P}_{k^*}\text{-probability 1.} \tag{14}$$
Using (13) and the fact that $\sum_{s \in \mathbb{S}} \pi(s)\, q_s(x^n) \ge \pi(s = (0, k^*)) \cdot \bar{p}_{k^*}(x^n)$, this implies (12). For
all $s \in A$ and $x^{t_m(s)} \in \mathcal{X}^{t_m(s)}$, by definition $Q_s(X_{t_m+1}^\infty \mid x^{t_m})$ equals $\bar{P}_{k_m}(X_{t_m+1}^\infty \mid x^{t_m})$, which is
mutually singular with $\bar{P}_{k^*}(X_{t_m+1}^\infty \mid x^{t_m})$ by assumption. If $\mathcal{X}$ is a separable metric space, which
holds because $\mathcal{X} \subseteq \mathbb{R}^d$ for some $d \in \mathbb{Z}^+$, it can be shown that this conditional mutual singularity
implies mutual singularity of $Q_s(X^\infty)$ and $\bar{P}_{k^*}(X^\infty)$. To see this for countable $\mathcal{X}$, let $B_{x^{t_m}}$ be any
event such that $Q_s(B_{x^{t_m}} \mid x^{t_m}) = 1$ and $\bar{P}_{k^*}(B_{x^{t_m}} \mid x^{t_m}) = 0$. Then, for $B = \{y^\infty \in \mathcal{X}^\infty \mid y_{t_m+1}^\infty \in B_{y^{t_m}}\}$,
we have that $Q_s(B) = 1$ and $\bar{P}_{k^*}(B) = 0$. In the uncountable case, however, $B$ may not be
measurable. We omit the full proof, which was shown to us by P. Harremoës. Any countable mixture
of distributions that are mutually singular with $\bar{P}_{k^*}$, in particular $R$, is mutually singular with $\bar{P}_{k^*}$.
This implies (14) by Lemma 3.1 of [2], which says that for any two mutually singular distributions
$R$ and $P$, the density ratio $r(X^n)/p(X^n)$ goes to 0 as $n \to \infty$ with $P$-probability 1.
Proof of Theorem 2. We will show that for every $\alpha > 1$,
$$\sup_{P^* \in \mathcal{M}^*} \sum_{i=1}^{n} R_i(P^*, P_{\mathrm{sw}}) \le \alpha \sum_{i=1}^{n} h(i) + \epsilon_{\alpha,n} \sum_{i=1}^{n} h(i), \tag{15}$$
where $\epsilon_{\alpha,n} \to 0$ as $n \to \infty$, and $\epsilon_{\alpha,1}, \epsilon_{\alpha,2}, \ldots$ are fixed constants that only depend on $\alpha$, but not on the
chosen subset $\mathcal{M}^*$ of $\langle\mathcal{M}\rangle$. Theorem 2 is a consequence of (15), which we will proceed to prove.
Let $\delta_n : \mathcal{X}^n \to \{1, \ldots, \lceil n^\tau \rceil\}$ be a model selection criterion, restricted to samples of size $n$, that
is minimax optimal, i.e. it achieves the infimum in (10). If such a $\delta_n$ does not exist, we take a $\delta_n$
that is almost minimax optimal in the sense that it achieves the infimum to within $h(n)/n$. For
$j \ge 1$, let $t_j = \lceil \alpha^{j-1} \rceil - 1$. Fix an arbitrary $n > 0$ and let $m$ be the unique integer such that
$t_m < n \le t_{m+1}$. We will first show that for arbitrary $x^n$, $p_{\mathrm{sw}}$ achieves redundancy not much worse
than $q_s$ with $s = (t_1, k_1), \ldots, (t_m, k_m)$, where $k_i = \delta_{t_i}(x^{t_i})$. Then we show that the redundancy of
this $q_s$ is small enough for (15) to hold. Thus, to achieve this redundancy, it is sufficient to take only
a logarithmic number $m - 1$ of switch-points: $m - 1 < \log_\alpha(n + 1)$. Formally, we have, for some
$c > 0$, uniformly for all $n$, $x^n \in \mathcal{X}^n$,
$$-\log p_{\mathrm{sw}}(x^n) = -\log \sum_{s' \in \mathbb{S}} q_{s'}(x^n)\, \pi(s') \le -\log q_s(x^n) - \log \pi_M(m) - \sum_{j=1}^{m} \log \pi_T(t_j)\, \pi_K(k_j)$$
$$\le -\log q_s(x^n) + c \log(n + 1) + cm(\tau + 1)\log n = -\log q_s(x^n) + O((\log n)^2). \tag{16}$$
Here the second inequality follows because of (9), and the final equality follows because $m \le \log_\alpha(n + 1) + 1$.
Now fix any $P^* \in \langle\mathcal{M}\rangle$. Since $P^* \in \langle\mathcal{M}\rangle$, it must have some density $p^*$. Thus,
applying (8), and then (16), and then (8) again, we find that
$$\sum_{i=1}^{n} R_i(P^*, P_{\mathrm{sw}}) = \mathbb{E}_{X^n \sim P^*}\big[-\log p_{\mathrm{sw}}(X^n) + \log p^*(X^n)\big]$$
$$\le \mathbb{E}_{X^n \sim P^*}\big[-\log q_s(X^n) + \log p^*(X^n)\big] + O((\log n)^2)$$
$$= \sum_{i=1}^{n} R_i(P^*, Q_s) + O((\log n)^2) = \sum_{j=1}^{m} \sum_{i=t_j+1}^{\min\{t_{j+1}, n\}} R_i(P^*, \bar{P}_{k_j}) + O((\log n)^2). \tag{17}$$
For $i$ appearing in the second sum, with $t_j < i \le t_{j+1}$, we have
$R_i(P^*, \bar{P}_{k_j}) \le \sup_{i' \ge t_j + 1} R_{i'}(P^*, \bar{P}_{k_j}) = \sup_{i' \ge t_j + 1} R_{i'}(P^*, \bar{P}_{\delta_{t_j}(x^{t_j})}) \le h(t_j + 1)$, so that
$$R_i(P^*, \bar{P}_{k_j}) \le \frac{1}{t_j + 1} \cdot (t_j + 1)\, h(t_j + 1) \le \frac{1}{t_j + 1} \cdot i\, h(i) \le \frac{t_{j+1}}{t_j + 1}\, h(i) \le \alpha\, h(i),$$
where the middle inequality follows because $nh(n)$ is increasing (condition (b) of the theorem).
Summing over $i$, we get $\sum_{j=1}^{m} \sum_{i=t_j+1}^{\min\{t_{j+1}, n\}} R_i(P^*, \bar{P}_{k_j}) \le \alpha \sum_{i=1}^{n} h(i)$. Combining this with
(17), it follows that $\sum_{i=1}^{n} R_i(P^*, P_{\mathrm{sw}}) \le \alpha \sum_{i=1}^{n} h(i) + O((\log n)^2)$. Because this holds for arbitrary $P^* \in \mathcal{M}^*$ (with the constant in the $O$ notation not depending on $P^*$), (15) now follows by the
requirement of Theorem 2 that $nh(n)/(\log n)^2 \to \infty$.
References
[1] H. Akaike. A new look at statistical model identification. IEEE T. Automat. Contr., 19(6):716-723, 1974.
[2] A. Barron. Logically Smooth Density Estimation. PhD thesis, Stanford University, Stanford, CA, 1985.
[3] A. Barron, J. Rissanen, and B. Yu. The minimum description length principle in coding and modeling. IEEE T. Inform. Theory, 44(6):2743-2760, 1998.
[4] A. R. Barron. Information-theoretic characterization of Bayes performance and the choice of priors in parametric and nonparametric problems. In Bayesian Statistics 6, pages 27-52, 1998.
[5] A. P. Dawid. Statistical theory: The prequential approach. J. Roy. Stat. Soc. A, 147, Part 2:278-292, 1984.
[6] P. D. Grünwald. The Minimum Description Length Principle. The MIT Press, 2007.
[7] M. Herbster and M. K. Warmuth. Tracking the best expert. Machine Learning, 32:151-178, 1998.
[8] R. E. Kass and A. E. Raftery. Bayes factors. J. Am. Stat. Assoc., 90(430):773-795, 1995.
[9] K. Li. Asymptotic optimality of cp, cl, cross-validation and generalized cross-validation: Discrete index set. Ann. Stat., 15:958-975, 1987.
[10] C. Monteleoni and T. Jaakkola. Online learning of non-stationary sequences. In Advances in Neural Information Processing Systems, volume 16, Cambridge, MA, 2004. MIT Press.
[11] J. Rissanen. Universal coding, information, prediction, and estimation. IEEE T. Inform. Theory, IT-30(4):629-636, 1984.
[12] J. Rissanen. Stochastic Complexity in Statistical Inquiry. World Scientific, 1989.
[13] J. Rissanen, T. P. Speed, and B. Yu. Density estimation by stochastic complexity. IEEE T. Inform. Theory, 38(2):315-323, 1992.
[14] G. Schwarz. Estimating the dimension of a model. Ann. Stat., 6(2):461-464, 1978.
[15] R. Shibata. Asymptotic mean efficiency of a selection of regression variables. Ann. I. Stat. Math., 35:415-423, 1983.
[16] M. Stone. An asymptotic equivalence of choice of model by cross-validation and Akaike's criterion. J. Roy. Stat. Soc. B, 39:44-47, 1977.
[17] P. Volf and F. Willems. Switching between two universal source coding algorithms. In Proceedings of the Data Compression Conference, Snowbird, Utah, pages 491-500, 1998.
[18] V. Vovk. Derandomizing stochastic prediction strategies. Machine Learning, 35:247-282, 1999.
[19] Y. Yang. Can the strengths of AIC and BIC be shared? Biometrika, 92(4):937-950, 2005.
[20] Y. Yang. Model selection for nonparametric regression. Statistica Sinica, 9:475-499, 1999.
2,511 | 3,278 | Spatial Latent Dirichlet Allocation
Xiaogang Wang and Eric Grimson
Computer Science and Artificial Intelligence Lab
Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
xgwang@csail.mit.edu, welg@csail.mit.edu
Abstract
In recent years, the language model Latent Dirichlet Allocation (LDA), which
clusters co-occurring words into topics, has been widely applied in the computer
vision field. However, many of these applications have difficulty with modeling
the spatial and temporal structure among visual words, since LDA assumes that a
document is a ?bag-of-words?. It is also critical to properly design ?words? and
?documents? when using a language model to solve vision problems. In this paper, we propose a topic model Spatial Latent Dirichlet Allocation (SLDA), which
better encodes spatial structures among visual words that are essential for solving
many vision problems. The spatial information is not encoded in the values of
visual words but in the design of documents. Instead of knowing the partition of
words into documents a priori, the word-document assignment becomes a random
hidden variable in SLDA. There is a generative procedure, where knowledge of
spatial structure can be flexibly added as a prior, grouping visual words which are
close in space into the same document. We use SLDA to discover objects from a
collection of images, and show it achieves better performance than LDA.
1 Introduction
Latent Dirichlet Allocation (LDA) [1] is a language model which clusters co-occurring words into
topics. In recent years, LDA has been widely used to solve computer vision problems. For example,
LDA was used to discover objects from a collection of images [2, 3, 4] and to classify images into
different scene categories [5]. [6] employed LDA to classify human actions. In visual surveillance,
LDA was used to model atomic activities and interactions in a crowded scene [7]. In these applications, LDA clustered low-level visual words (which were image patches, spatial and temporal
interest points or moving pixels) into topics with semantic meanings (which corresponded to objects,
parts of objects, human actions or atomic activities) utilizing their co-occurrence information.
Even with these promising achievements, however, directly borrowing a language model to solve
vision problems has some difficulties. First, LDA assumes that a document is a bag of words,
such that spatial and temporal structures among visual words, which are meaningless in a language
model but important in many computer vision problems, are ignored. Second, users need to define
the meaning of ?documents? in vision problems. The design of documents often implies some
assumptions on vision problems. For example, in order to cluster image patches, which are treated
as words, into classes of objects, researchers treated images as documents [2]. This assumes that
if two types of patches are from the same object class, they often appear in the same images. This
assumption is reasonable, but not strong enough. As an example shown in Figure 1, even though
sky is far from vehicles, if they often exist in the same images in some data set, they would be
clustered into the same topic by LDA. Furthermore, since in this image most of the patches are sky
and building, a patch on a vehicle is likely to be labeled as building or sky as well. These problems
could be solved if the document of a patch, such as the yellow patch in Figure 1, only includes other
Figure 1: There will be some problems (see text) if the whole image is treated as one document
when using LDA to discover classes of objects.
patches falling within its neighborhood, marked by the red dashed window in Figure 1, instead of
the whole image. So a better assumption is that if two types of image patches are from the same
object class, they are not only often in the same images but also close in space. We expect to utilize
spatial information in a flexible way when designing documents for solving vision problems.
In this paper, we propose a Spatial Latent Dirichlet Allocation (SLDA) model which encodes the
spatial structure among visual words. It clusters visual words (e.g. an eye patch and a nose patch),
which often occur in the same images and are close in space, into one topic (e.g. face). This is
a more proper assumption for solving many vision problems when images often contain several
objects. It is also easy for SLDA to model activities and human actions by encoding temporal
information. However the spatial or temporal information is not encoded in the values of visual
words, but in the design of documents. LDA and its extensions, such as the author-topic model [8],
the dynamic topic model [9], and the correlated topic model [10], all assume that the partition
of words into documents is known a priori. A key difference of SLDA is that the word-document
assignment becomes a hidden random variable. There is a generative procedure to assign words to
documents. When visual words are close in space or time, they have a high probability to be grouped
into the same document. Some approaches such as [11, 3, 12, 4] could also capture some spatial
structures among visual words. [11] assumed that the spatial distribution of an object class could
be modeled as Gaussian and the number of objects in the image was known. Both [3] and [4] first
roughly segmented images using graph cuts and added spatial constraint using these segments. [12]
modeled the spatial dependency among image patches as Markov random fields.
As an example application, we use the SLDA model to discover objects from a collection of images.
As shown in Figure 2, there are different classes of objects, such as cows, cars, faces, grasses,
sky, bicycles, etc., in the image set. And an image usually contains several objects of different
classes. The goal is to segment objects from images, and at the same time, to label these segments
as different object classes in an unsupervised way. It integrates object segmentation and recognition.
In our approach images are divided into local patches. A local descriptor is computed for each
image patch and quantized into a visual word. Using topic models, the visual words are clustered
into topics which correspond to object classes. Thus an image patch can be labeled as one of the
object classes. Our work is related to [2] which used LDA to cluster image patches. As shown in
Figure 2, SLDA achieves much better performance than LDA. We will compare more results of
LDA and SLDA in the experimental section.
2 Computation of Visual Words
To obtain the local descriptors, images are convolved with the filter bank proposed in [13], which is
a combination of 3 Gaussians, 4 Laplacian of Gaussians, and 4 first order derivatives of Gaussians,
and was shown to have good performance for object categorization. Instead of only computing
visual words at interest points as in [2], we divide an image into local patches on a grid and densely
sample a local descriptor for each patch. A codebook of size W is created by clustering all the
local descriptors in the image set using K-means. Each local patch is quantized into a visual word
according to the codebook. In the next step, these visual words (image patches) will be further
clustered into classes of objects. We will compare two clustering methods, LDA and SLDA.
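As a concrete sketch of this first step (our own illustration: the filter bank below keeps only Gaussian and Laplacian-of-Gaussian components with placeholder scales, and the grid step, image sizes and library choices are assumptions, not the paper's exact setup):

import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans

def patch_descriptors(img, step=8):
    """Filter responses stacked per pixel, sampled at patch centers on a grid."""
    responses = [gaussian_filter(img, s) for s in (1, 2, 4)]       # Gaussians
    responses += [gaussian_laplace(img, s) for s in (1, 2, 4, 8)]  # LoG filters
    stack = np.stack(responses, axis=-1)
    ys, xs = np.mgrid[step // 2:img.shape[0]:step, step // 2:img.shape[1]:step]
    return stack[ys.ravel(), xs.ravel()], np.c_[xs.ravel(), ys.ravel()]

imgs = [np.random.rand(120, 160) for _ in range(5)]    # stand-in images
descs = [patch_descriptors(im)[0] for im in imgs]
kmeans = KMeans(n_clusters=200, n_init=4).fit(np.vstack(descs))
words = [kmeans.predict(d) for d in descs]             # one visual word per patch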
Figure 2: Given a collection of images as shown in the first row (which are selected from the MSRC
image dataset [13]), the goal is to segment images into objects and cluster these objects into different
classes. The second row uses manual segmentation and labeling as ground truth. The third row is
the LDA result and the fourth row is the SLDA result. Under the same labeling approach, image
patches marked in the same color are in one object cluster, but the meaning of colors changes across
different labeling methods.
3 LDA
When LDA is used to solve our problem, we treat local patches of images as words and the whole
image as a document. The graphical model of LDA is shown in Figure 3 (a). There are $M$ documents (images) in the corpus. Each document $j$ has $N_j$ words (image patches). $w_{ji}$ is the observed
value of word $i$ in document $j$. All the words in the corpus will be clustered into $K$ topics (classes
of objects). Each topic $k$ is modeled as a multinomial distribution $\phi_k$ over the codebook. $\alpha$ and $\beta$ are
Dirichlet prior hyperparameters. $\phi_k$, $\pi_j$, and $z_{ji}$ are hidden variables to be inferred. The generative
process of LDA is:
1. For a topic $k$, a multinomial parameter $\phi_k$ is sampled from Dirichlet prior $\phi_k \sim \mathrm{Dir}(\beta)$.
2. For a document $j$, a multinomial parameter $\pi_j$ over the $K$ topics is sampled from Dirichlet
prior $\pi_j \sim \mathrm{Dir}(\alpha)$.
3. For a word $i$ in document $j$, a topic label $z_{ji}$ is sampled from discrete distribution $z_{ji} \sim \mathrm{Discrete}(\pi_j)$.
4. The value $w_{ji}$ of word $i$ in document $j$ is sampled from the discrete distribution of topic
$z_{ji}$, $w_{ji} \sim \mathrm{Discrete}(\phi_{z_{ji}})$.
$z_{ji}$ can be sampled through a Gibbs sampling procedure which integrates out $\phi$ and $\pi$ [14]:
$$p(z_{ji} = k \mid \mathbf{z}_{-ji}, \mathbf{w}, \alpha, \beta) \propto \frac{n^{(k)}_{-ji, w_{ji}} + \beta_{w_{ji}}}{\sum_{w=1}^{W} n^{(k)}_{-ji, w} + \beta_w} \cdot \frac{n^{(j)}_{-ji, k} + \alpha_k}{\sum_{k'=1}^{K} n^{(j)}_{-ji, k'} + \alpha_{k'}}, \tag{1}$$
where $n^{(k)}_{-ji, w}$ is the number of words in the corpus with value $w$ assigned to topic $k$ excluding word
$i$ in document $j$, and $n^{(j)}_{-ji, k}$ is the number of words in document $j$ assigned to topic $k$ excluding
word $i$ in document $j$. Eq 1 is the product of two ratios: the probability of word $w_{ji}$ under topic $k$
and the probability of topic $k$ in document $j$. So LDA clusters the visual words often co-occurring
in the same images into one object class.
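A minimal collapsed Gibbs sampler implementing Eq 1 might look as follows; this is our own sketch with symmetric scalar hyperparameters, not the authors' implementation.

import numpy as np

def lda_gibbs(docs, K, W, alpha=0.5, beta=0.1, n_iter=200, seed=0):
    """Collapsed Gibbs sampling for LDA (Eq 1), symmetric priors.
    docs: list of lists of word ids in [0, W). Returns per-word topic labels."""
    rng = np.random.default_rng(seed)
    z = [rng.integers(K, size=len(d)) for d in docs]
    n_kw = np.zeros((K, W))                      # topic-word counts
    n_jk = np.zeros((len(docs), K))              # document-topic counts
    for j, d in enumerate(docs):
        for i, w in enumerate(d):
            n_kw[z[j][i], w] += 1
            n_jk[j, z[j][i]] += 1
    for _ in range(n_iter):
        for j, d in enumerate(docs):
            for i, w in enumerate(d):
                k = z[j][i]
                n_kw[k, w] -= 1; n_jk[j, k] -= 1        # exclude word (j, i)
                p = ((n_kw[:, w] + beta) / (n_kw.sum(axis=1) + W * beta)
                     * (n_jk[j] + alpha))   # both ratios of Eq 1; the second
                p /= p.sum()                # ratio's denominator is constant in k
                k = rng.choice(K, p=p)
                z[j][i] = k
                n_kw[k, w] += 1; n_jk[j, k] += 1
    return z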
As shown by some examples in Figure 2 (see more results in the experimental section), there are
two problems in using LDA for object segmentation and recognition.
Figure 3: Graphical model of LDA (a) and SLDA (b). See text for details.
The segmentation result is noisy since spatial information is not considered. Although LDA assumes that one image contains
multiple topics, from experimental results we observe that the patches in the same image are likely
to have the same labels. Since the whole image is treated as one document, if one object class, e.g.
car in Figure 2, is dominant in the image, the second ratio in Eq 1 will lead to a large bias towards
the car class, and thus the patches of street are also likely to be labeled as car. This problem could
be solved if a local patch only considers its neighboring patches as being in the same document.
4
SLDA
We assume that if visual words are from the same class of objects, they not only often co-occur in the
same images but also are close in space. So we try to group image patches which are close in space
into the same documents. One straightforward way is to divide the image into regions as shown in
Figure 4 (a). Each region is treated as a document instead of the whole image. However, since these
regions are not overlapped, some patches, such as A (red patch) and B (cyan patch) in Figure 4 (a),
even though very close in space, are assigned to different documents. In Figure 4 (a), patch A on
the cow is likely to be labeled as grass, since most other patches in its document are grass. To solve
this problem, we may put many overlapped regions, each of which is a document, on the images as
shown in Figure 4 (b). If a patch is inside a region, it ?could? belong to that document. Any two
patches whose distance is smaller than the region size ?could? belong to the same document if the
regions are placed densely enough. We use the word ?could? because each local patch is covered
by several regions, so we have to decide to which document it belongs. Different from the LDA
model, in which the word-document relationship is known a priori, we need a generative procedure
assigning words to documents. If two patches are closer in space, they have a higher probability
to be assigned to the same document since there are more regions covering both of them. Actually
we can go even further. As shown in Figure 4 (c), each document can be represented by a point
(marked by magenta circle) in the image, assuming its region covers the whole image. If an image
patch is close to a document, it has a high probability to be assigned to that document.
The graphical model is shown in Figure 3 (b). In SLDA, there are $M$ documents and $N$ words in the
corpus. A hidden variable $d_i$ indicates which document word $i$ is assigned to. For each document
$j$ there is a hyperparameter $c^d_j = (g^d_j, x^d_j, y^d_j)$ known a priori. $g^d_j$ is the index of the image where
document $j$ is placed and $(x^d_j, y^d_j)$ is the location of the document. For a word $i$, in addition to the
observed word value $w_i$, its location $(x_i, y_i)$ and image index $g_i$ are also observed and stored in
variable $c_i = (g_i, x_i, y_i)$. The generative procedure of SLDA is:
1. For a topic $k$, a multinomial parameter $\phi_k$ is sampled from Dirichlet prior $\phi_k \sim \mathrm{Dir}(\beta)$.
Figure 4: There are several ways to add spatial information among image patches when designing
documents. (a): Divide the image into regions without overlapping. Each region, marked by a
dashed window, corresponds to a document. Image patches inside the region are assigned to the
corresponding document. (b): densely put overlapped regions over images. One image patch is
covered by multiple regions. (c): Each document is associated with a point (marked in magenta
color). These points are densely placed over the image. If an image patch is close to a document, it
has a high probability to be assigned to that document.
2. For a document $j$, a multinomial parameter $\pi_j$ over the $K$ topics is sampled from Dirichlet
prior $\pi_j \sim \mathrm{Dir}(\alpha)$.
3. For a word (image patch) $i$, a random variable $d_i$ is sampled from prior $p(d_i \mid \eta)$ indicating
to which document word $i$ is assigned. We choose $p(d_i \mid \eta)$ as a uniform prior.
4. The image index and location of word $i$ is sampled from distribution $p(c_i \mid c^d_{d_i}, \sigma)$. We may
choose this as a Gaussian kernel:
$$p\big((g_i, x_i, y_i) \mid (g^d_{d_i}, x^d_{d_i}, y^d_{d_i}), \sigma\big) \propto \delta_{g^d_{d_i}}(g_i)\, \exp\Big(-\frac{(x^d_{d_i} - x_i)^2 + (y^d_{d_i} - y_i)^2}{\sigma^2}\Big),$$
and $p(c_i \mid c^d_{d_i}, \sigma) = 0$ if the word and the document are not in the same image.
5. The topic label $z_i$ of word $i$ is sampled from the discrete distribution of document $d_i$,
$z_i \sim \mathrm{Discrete}(\pi_{d_i})$.
6. The value $w_i$ of word $i$ is sampled from the discrete distribution of topic $z_i$, $w_i \sim \mathrm{Discrete}(\phi_{z_i})$.
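Written out directly, the generative procedure looks like the sketch below (our own illustration with made-up sizes and a single image; note that the per-coordinate noise scale $\sigma/\sqrt{2}$ matches the kernel $\exp(-(\Delta x^2 + \Delta y^2)/\sigma^2)$).

import numpy as np

rng = np.random.default_rng(0)
K, W, M, N, sigma = 5, 200, 50, 400, 20.0        # made-up sizes
img_h, img_w = 240, 320                          # a single hypothetical image

phi = rng.dirichlet(np.full(W, 0.1), size=K)     # step 1: topic-word multinomials
pi = rng.dirichlet(np.full(K, 0.5), size=M)      # step 2: per-document mixtures
doc_xy = np.c_[rng.uniform(0, img_w, M),         # document anchor points, densely
               rng.uniform(0, img_h, M)]         # placed over the image

d = rng.integers(M, size=N)                      # step 3: uniform document prior
xy = doc_xy[d] + rng.normal(0.0, sigma / np.sqrt(2), size=(N, 2))  # step 4
z = np.array([rng.choice(K, p=pi[j]) for j in d])     # step 5: topic labels
w = np.array([rng.choice(W, p=phi[k]) for k in z])    # step 6: visual word values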
4.1 Gibbs Sampling
$z_i$ and $d_i$ can be sampled through a Gibbs sampling procedure integrating out $\phi$ and $\pi$. In SLDA
the conditional distribution of $z_i$ given $d_i$ is the same as in LDA:
$$p(z_i = k \mid d_i = j, \mathbf{z}_{-i}, \mathbf{d}_{-i}, \mathbf{w}, \alpha, \beta) \propto \frac{n^{(k)}_{-i, w_i} + \beta_{w_i}}{\sum_{w=1}^{W} n^{(k)}_{-i, w} + \beta_w} \cdot \frac{n^{(j)}_{-i, k} + \alpha_k}{\sum_{k'=1}^{K} n^{(j)}_{-i, k'} + \alpha_{k'}}, \tag{2}$$
where $n^{(k)}_{-i, w}$ is the number of words in the corpus with value $w$ assigned to topic $k$ excluding word
$i$, and $n^{(j)}_{-i, k}$ is the number of words in document $j$ assigned to topic $k$ excluding word $i$. This is
easy to understand since if the word-document assignment is fixed, SLDA is the same as LDA.
In addition, we also need to sample $d_i$ from the conditional distribution given $z_i$:
$$p\big(d_i = j \mid z_i = k, \mathbf{z}_{-i}, \mathbf{d}_{-i}, c_i, \{c^d_{j'}\}\big) \propto p(d_i = j \mid \eta)\, p\big(c_i \mid c^d_j, \sigma\big)\, p(z_i = k \mid \mathbf{z}_{-i}, d_i = j, \mathbf{d}_{-i}).$$
$p(z_i = k \mid \mathbf{z}_{-i}, d_i = j, \mathbf{d}_{-i})$ is obtained by integrating out $\pi$: the joint probability of all topic labels
given the document assignments is the standard Dirichlet-multinomial marginal
$$p(\mathbf{z} \mid \mathbf{d}) = \prod_{j'=1}^{M} \int p(\pi_{j'} \mid \alpha)\, p(\mathbf{z}_{j'} \mid \pi_{j'})\, d\pi_{j'} = \prod_{j'=1}^{M} \frac{\Gamma\big(\sum_{k'=1}^{K} \alpha_{k'}\big)}{\prod_{k'=1}^{K} \Gamma(\alpha_{k'})} \cdot \frac{\prod_{k'=1}^{K} \Gamma\big(n^{(j')}_{k'} + \alpha_{k'}\big)}{\Gamma\big(\sum_{k'=1}^{K} (n^{(j')}_{k'} + \alpha_{k'})\big)},$$
so that $p(z_i = k \mid \mathbf{z}_{-i}, d_i = j, \mathbf{d}_{-i}) \propto (n^{(j)}_{-i, k} + \alpha_k) / \big(\sum_{k'=1}^{K} n^{(j)}_{-i, k'} + \alpha_{k'}\big)$.
We choose $p(d_i = j \mid \eta)$ as a uniform prior and $p(c_i \mid c^d_j, \sigma)$ as a Gaussian kernel. Thus the conditional distribution of $d_i$ is
$$p\big(d_i = j \mid z_i = k, \mathbf{z}_{-i}, \mathbf{d}_{-i}, c_i, \{c^d_{j'}\}, \alpha, \beta, \eta, \sigma\big) \propto \delta_{g^d_j}(g_i)\, e^{-\frac{(x^d_j - x_i)^2 + (y^d_j - y_i)^2}{\sigma^2}} \cdot \frac{n^{(j)}_{-i, k} + \alpha_k}{\sum_{k'=1}^{K} n^{(j)}_{-i, k'} + \alpha_{k'}}. \tag{3}$$
Word i is likely to be assigned to document j if they are in the same image, close in space and word
i has the same topic label as other words in document j. In real applications, we only care about the
distribution of zi while dj can be marginalized by simply ignoring its samples. From Eq 2 and 3,
we observed that a word tends to have the same topic label as other words in its document and words
closer in space are more likely to be assigned to the same documents. So essentially under SLDA a
word tends to be labeled as the same topic as other words close to it. This satisfies our assumption
that visual words from the same object class are closer in space.
Since we densely place many documents over one image, during Gibbs sampling some documents
are only assigned a few words and the distributions cannot be well estimated. To solve this problem
we replicate each image patch to get many particles. These particles have the same word value and
location but can be assigned to different documents and have different labels. Thus each document
will have enough samples of words to estimate the distributions.
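Concretely, one Gibbs sweep over Eqs 2 and 3 could be implemented as below; this is our own reconstruction with simplified bookkeeping, assuming symmetric scalar hyperparameters, a uniform document prior and a single image.

import numpy as np

def slda_gibbs_sweep(w, xy, doc_xy, z, d, n_kw, n_jk, alpha, beta, sigma, rng):
    """One Gibbs sweep: resample (z_i, d_i) for every word, per Eqs 2 and 3."""
    K, W = n_kw.shape
    M = n_jk.shape[0]
    for i in range(len(w)):
        n_kw[z[i], w[i]] -= 1                      # remove word i from the counts
        n_jk[d[i], z[i]] -= 1
        # Eq 2: resample the topic z_i given the current document d_i.
        pz = ((n_kw[:, w[i]] + beta) / (n_kw.sum(axis=1) + W * beta)
              * (n_jk[d[i]] + alpha))
        z[i] = rng.choice(K, p=pz / pz.sum())
        # Eq 3: resample the document d_i given the new z_i (uniform p(d_i),
        # Gaussian kernel on the document-word distance, single image assumed).
        sq = ((doc_xy - xy[i]) ** 2).sum(axis=1)
        pd = (np.exp(-sq / sigma ** 2)
              * (n_jk[:, z[i]] + alpha) / (n_jk.sum(axis=1) + K * alpha))
        d[i] = rng.choice(M, p=pd / pd.sum())
        n_kw[z[i], w[i]] += 1                      # add word i back
        n_jk[d[i], z[i]] += 1
    return z, d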
4.2 Discussion
SLDA is a flexible model intended to encode spatial structure among image patches and design
documents. If there is only one document placed over one image, SLDA simply reduces to LDA.
If $p(c_i \mid c^d_j)$ is a uniform distribution inside a local region, SLDA implements the scheme described
in Figure 4 (b). If these local regions are not overlapped, it is the case of Figure 4 (a). There are
also other possible ways to add spatial information by choosing different spatial priors $p(c_i \mid c^d_j)$. In
SLDA, the spatial information is used when designing documents. However the object class model
$\phi_k$, simply a multinomial distribution over the codebook, has no spatial structure. So the objects of
a class could be in any shape and anywhere in the images, as long as they smoothly distribute in
space. By simply adding a time stamp to $c_i$ and $c^d_j$, it is easy for SLDA to encode temporal structure
among visual words. So SLDA also can be applied to human action and activity analysis.
5
Experiments
We test LDA and SLDA on the MSRC image dataset [13] with 240 images. Our codebook size is
200 and the topic number is 15. In Figure 2, we show some examples of results using LDA and
SLDA. Colors are used to indicate different topics. The results of LDA are noisy and within one image
most of the patches are labeled as one topic. SLDA achieves much better results than LDA. The
results are smoother and objects are well segmented. The detection rate and false alarm rate of four
classes, cows, cars, faces, and bicycles are shown in Table 1. They are counted in pixels. We use the
manual segmentation and labeling in [13] as ground truth.
The two models are also tested on a tiger video sequence with 252 frames. We treat all the frames
in the sequence as an image collection and ignore their temporal order. Figure 5 shows their results
on two sampled frames. Please see the results for the whole video sequence on our website [15].
Using LDA, usually there are one or two dominant topics distributed like noise in a frame. Topics
change as the video background changes. LDA cannot segment out any objects. SLDA clusters
image patches into tigers, rock, water, and grass. If we choose the topic of tiger, as shown in the last
row of Figure 5, all the tigers in the video can be segmented out.
6 Conclusion
We propose a novel Spatial Latent Dirichlet Allocation model which clusters co-occurring and spatially neighboring visual words into the same topic. Instead of knowing word-document assignment
a priori, SLDA has a generative procedure partitioning visual words which are close in space into
the same documents. It is also easy to extend SLDA to include temporal information.
Figure 5: Discovering objects from a video sequence. The first column shows two frames in the
video sequence. In the second column, we label the patches in the two frames as different topics
using LDA. The third column plots the topic labels using SLDA. The red color indicates the topic
of tigers. In the fourth column, we segment tigers out by choosing the topic marked in red.
Table 1: Detection (D) rate and False Alarm (FA) rate of LDA and SLDA on the MSRC data set

            cows     cars     faces    bicycles
LDA(D)      0.3755   0.5552   0.7172   0.5563
SLDA(D)     0.5662   0.6838   0.6973   0.5661
LDA(FA)     0.5576   0.3963   0.5862   0.5285
SLDA(FA)    0.0334   0.2437   0.3714   0.4217
Acknowledgement
The authors wish to acknowledge DSO National Laboratory of Singapore for partially supporting
this research.
References
[1] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research,
3:993-1022, 2003.
[2] J. Sivic, B. C. Russell, A. A. Efros, A. Zisserman, and W. T. Freeman. Discovering object categories in
image collections. In Proc. ICCV, 2005.
[3] B. C. Russell, A. A. Efros, J. Sivic, W. T. Freeman, and A. Zisserman. Using multiple segmentations to
discover objects and their extent in image collections. In Proc. CVPR, 2006.
[4] L. Cao and L. Fei-Fei. Spatially coherent latent topic model for concurrent object segmentation and
classification. In Proc. ICCV, 2007.
[5] L. Fei-Fei and P. Perona. A bayesian hierarchical model for learning natural scene categories. In Proc.
CVPR, 2005.
[6] J. C. Niebles, H. Wang, and L. Fei-Fei. Unsupervised learning of human action categories using spatial-temporal words. In Proc. BMVC, 2006.
[7] X. Wang, X. Ma, and E. Grimson. Unsupervised activity perception by hierarchical bayesian models. In
Proc. CVPR, 2007.
[8] M. Rosen-Zvi, T. Griffiths, M. Steyvers, and P. Smyth. The author-topic model for authors and documents.
In Proc. of Uncertainty in Artificial Intelligence, 2004.
[9] D. Blei and J. Lafferty. Dynamic topic models. In Proc. ICML, 2006.
[10] D. Blei and J. Lafferty. Correlated topic models. In Proc. NIPS, 2006.
[11] E. B. Sudderth, A. Torralba, W. T. Freeman, and A. S. Willsky. Learning hierarchical models of scenes,
objects, and parts. In Proc. ICCV, 2005.
[12] J. Verbeek and B. Triggs. Region classification with Markov field aspect models. In Proc. CVPR, 2007.
Figure 6: Examples of experimental results on the MSRC image data set. (a): original images; (b):
LDA results; (c) SLDA results.
[13] J. Winn, A. Criminisi, and T. Minka. Object categorization by learned universal visual dictionary. In
Proc. ICCV, 2005.
[14] T. Griffiths and M. Steyvers. Finding scientific topics. In Proc. of the National Academy of Sciences,
2004.
[15] http://people.csail.mit.edu/xgwang/slda.html.
Modeling image patches with a directed hierarchy of
Markov random fields
Simon Osindero and Geoffrey Hinton
Department of Computer Science, University of Toronto
6, King?s College Road, M5S 3G4, Canada
osindero,hinton@cs.toronto.edu
Abstract
We describe an efficient learning procedure for multilayer generative models that
combine the best aspects of Markov random fields and deep, directed belief nets.
The generative models can be learned one layer at a time and when learning is
complete they have a very fast inference procedure for computing a good approximation to the posterior distribution in all of the hidden layers. Each hidden layer
has its own MRF whose energy function is modulated by the top-down directed
connections from the layer above. To generate from the model, each layer in turn
must settle to equilibrium given its top-down input. We show that this type of
model is good at capturing the statistics of patches of natural images.
1 Introduction
The soldiers on a parade ground form a neat rectangle by interacting with their neighbors. An officer
decides where the rectangle should be, but he would be ill-advised to try to tell each individual soldier exactly where to stand. By allowing constraints to be enforced by local interactions, the officer
enormously reduces the bandwidth of top-down communication required to generate a familiar pattern. Instead of micro-managing the soldiers, the officer specifies an objective function and leaves
it to the soldiers to optimise that function. This example of pattern generation suggests that a multilayer, directed belief net may not be the most effective way to generate patterns. Instead of using
shared ancestors to create correlations between the variables within a layer, it may be more efficient
for each layer to have its own energy function that is modulated by directed, top-down input from
the layer above. Given the top-down input, each layer can then use lateral interactions to settle on
a good configuration and this configuration can then provide the top-down input for the next layer
down. When generating an image of a face, for example, the approximate locations of the mouth
and nose might be specified by a higher level and the local interactions would then ensure that the
accuracy of their vertical alignment was far greater than the accuracy with which their locations
were specified top-down.
In this paper, we show that recently developed techniques for learning deep belief nets (DBNs) can
be generalized to solve the apparently more difficult problem of learning a directed hierarchy of
Markov Random Fields (MRF?s). The method we describe can learn models that have many hidden
layers, each with its own MRF whose energy function is conditional on the values of the variables in
the layer above. It does not require detailed prior knowledge about the data to be modeled, though
it obviously works better if the architecture and the types of latent variable are well matched to the
task.
2 Learning deep belief nets: An overview
The learning procedure for deep belief nets has now been described in several places (Hinton et al.,
2006; Hinton and Salakhutdinov, 2006; Bengio et al., 2007) and will only be sketched here. It relies
on a basic module, called a restricted Boltzmann machine (RBM) that can be trained efficiently
using a method called "contrastive divergence" (Hinton, 2002).
2.1 Restricted Boltzmann Machines
An RBM consists of a layer of binary stochastic "visible" units connected to a layer of binary,
stochastic "hidden" units via symmetrically weighted connections. A joint configuration, (v, h) of
the visible and hidden units has an energy given by:
$$E(v, h) = -\sum_{i \in \text{visibles}} b_i v_i \;-\; \sum_{j \in \text{hiddens}} b_j h_j \;-\; \sum_{i,j} v_i h_j w_{ij} \qquad (1)$$
where vi , hj are the binary states of visible unit i and hidden unit j, bi , bj are their biases and wij is
the symmetric weight between them. The network assigns a probability to every possible image via
this energy function and the probability of a training image can be raised by adjusting the weights
and biases to lower the energy of that image and to raise the energy of similar, reconstructed images
that the network would prefer to the real data.
Given a training vector, $v$, the binary state, $h_j$, of each feature detector, $j$, is set to 1 with probability
$\sigma(b_j + \sum_i v_i w_{ij})$, where $\sigma(x)$ is the logistic function $1/(1 + \exp(-x))$, $b_j$ is the bias of $j$, $v_i$ is
the state of visible unit $i$, and $w_{ij}$ is the weight between $i$ and $j$. Once binary states have been
chosen for the hidden units, a reconstruction is produced by setting each $v_i$ to 1 with probability
$\sigma(b_i + \sum_j h_j w_{ij})$. The states of the hidden units are then updated once more so that they represent
features of the reconstruction. The change in a weight is given by
$$\Delta w_{ij} = \epsilon\,\big(\langle v_i h_j \rangle_{\text{data}} - \langle v_i h_j \rangle_{\text{recon}}\big) \qquad (2)$$
where $\epsilon$ is a learning rate, $\langle v_i h_j \rangle_{\text{data}}$ is the fraction of times that visible unit $i$ and hidden unit $j$
are on together when the hidden units are being driven by data and $\langle v_i h_j \rangle_{\text{recon}}$ is the corresponding
fraction for reconstructions. A simplified version of the same learning rule is used for the biases.
The learning works well even though it is not exactly following the gradient of the log probability
of the training data (Hinton, 2002).
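As a concrete illustration of the update in Eq. 2, here is a minimal sketch of one CD-1 step on a minibatch; it assumes NumPy, in-place parameter updates, and our own function names rather than anything from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1):
    """One contrastive-divergence update (Eq. 2) on a minibatch v0 (N x n_vis)."""
    p_h0 = sigmoid(v0 @ W + b_hid)                           # hidden probs given data
    h0 = (np.random.rand(*p_h0.shape) < p_h0).astype(float)  # stochastic binary states
    p_v1 = sigmoid(h0 @ W.T + b_vis)                         # mean-field reconstruction
    p_h1 = sigmoid(p_v1 @ W + b_hid)                         # hidden probs given recon
    n = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n              # <v h>_data - <v h>_recon
    b_vis += lr * (v0 - p_v1).mean(0)
    b_hid += lr * (p_h0 - p_h1).mean(0)
```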
2.2 Compositions of experts
A single layer of binary features is usually not the best way to capture the structure in the data. We
now show how RBMs can be composed to create much more powerful, multilayer models.
After using an RBM to learn the first layer of hidden features we have an undirected model that
defines p(v, h) via the energy function in Eq. 1. We can also think of the model as defining p(v, h)
by defining a consistent pair of conditional probabilities, p(h|v) and p(v|h) which can be used to
sample from the model distribution. A different way to express what has been learned is p(v|h)
and p(h). Unlike a standard directed model, this p(h) does not have its own separate parameters.
It is a complicated, non-factorial prior on h that is defined implicitly by the weights. This peculiar
decomposition into p(h) and p(v|h) suggests a recursive algorithm: keep the learned p(v|h) but
replace p(h) by a better prior over h, i.e. a prior that is closer to the average, over all the data
vectors, of the conditional posterior over h.
We can sample from this average conditional posterior by simply applying p(h|v) to the training
data. The sampled h vectors are then the "data" that is used for training a higher-level RBM that
learns the next layer of features. We could initialize the higher-level RBM model by using the same
parameters as the lower-level RBM but with the roles of the hidden and visible units reversed. This
ensures that p(v) for the higher-level RBM starts out being exactly the same as p(h) for the lower-level one. Provided the number of features per layer does not decrease, Hinton et al. (2006) show
that each extra layer increases a variational lower bound on the log probability of the data.
The directed connections from the first hidden layer to the visible units in the final, composite
graphical model are a consequence of the fact that we keep the p(v|h) but throw away the p(h)
defined by the first level RBM. In the final composite model, the only undirected connections are
between the top two layers, because we do not throw away the p(h) for the highest-level RBM. To
suppress noise in the learning signal, we use the real-valued activation probabilities for the visible
units of all the higher-level RBMs, but to prevent hidden units from transmitting more than one bit
of information from the data to its reconstruction, we always use stochastic binary values for the
hidden units.
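The recursive recipe above can be sketched as a short loop that stacks RBMs greedily, reusing cd1_step and sigmoid from the previous snippet; treating the whole training set as one batch and using a fixed epoch count are simplifications of ours, not settings from the paper.

```python
def train_dbn(data, layer_sizes, n_epochs=30, rng=np.random):
    """Greedy layer-wise training: each new RBM is trained on stochastic
    samples of the hidden activities of the RBM below it."""
    params, x = [], data
    for n_hid in layer_sizes:
        W = 0.01 * rng.randn(x.shape[1], n_hid)
        b_vis, b_hid = np.zeros(x.shape[1]), np.zeros(n_hid)
        for _ in range(n_epochs):
            cd1_step(x, W, b_vis, b_hid)
        params.append((W, b_vis, b_hid))
        p_h = sigmoid(x @ W + b_hid)
        x = (rng.rand(*p_h.shape) < p_h).astype(float)   # "data" for the next level
    return params
```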
3 Semi-restricted Boltzmann machines
For contrastive divergence learning to work well, it is important for the hidden units to be sampled
from their conditional distribution given the data or the reconstructions. It not necessary, however,
for the reconstructions to be sampled from their conditional distribution given the hidden states. All
that is required is that the reconstructions have lower free energy than the data. So it is possible to
include lateral connections between the visible units and to create reconstructions by taking a small
step towards the conditional equilibrium distribution given the hidden states. If we are using meanfield activities for the reconstructions, we can move towards the equilibrium distribution by using a
few damped mean-field updates (Welling and Hinton, 2002). We call this a semi-restricted Boltzmann machine (SRBM). The visible units form a conditional MRF with the biases of the visible
units being determined by the hidden states. The learning procedure for the visible to hidden connections is unaffected and the same learning procedure applies to the lateral connections. Explicitly,
the energy function for a SRBM is given by
$$E(v, h) = -\sum_{i \in \text{visibles}} b_i v_i \;-\; \sum_{j \in \text{hiddens}} b_j h_j \;-\; \sum_{i,j} v_i h_j w_{ij} \;-\; \sum_{i<i'} v_i v_{i'} L_{ii'} \qquad (3)$$
and the update rule for the lateral connections is
$$\Delta L_{ii'} = \epsilon\,\big(\langle v_i v_{i'} \rangle_{\text{data}} - \langle v_i v_{i'} \rangle_{\text{recon}}\big) \qquad (4)$$
Semi-restricted Boltzmann machines can be learned greedily and composed to form a directed hierarchy of conditional MRFs. To generate from the composite model we first get an equilibrium
sample from the top level SRBM and then we get an equilibrium sample from each lower level MRF
in turn, given the top-down input from the sample in the layer above. The settling at each intermediate level does not need to explore a highly multi-modal energy landscape because the top-down
input has already selected a good region of the space. The role of the settling is simply to sharpen
the somewhat vague top-down specification and to ensure that the resulting configuration respects
learned constraints. Each intermediate level fills in the details given the larger picture defined by the
level above.
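A minimal sketch of such a damped mean-field settling step, reusing sigmoid from the earlier snippet and assuming the lateral weights L are stored as an upper-triangular matrix; the 0.2/0.8 damping follows the schedule reported in Section 6.1.

```python
def srbm_reconstruct(h, v_init, W, L, b_vis, n_steps=5, damp=0.2):
    """Damped mean-field reconstruction of the visibles of an SRBM, with the
    top-down input from the (fixed) hidden states h held constant."""
    v = v_init.copy()
    top_down = h @ W.T + b_vis           # fixed during the settling
    for _ in range(n_steps):
        lateral = v @ (L + L.T)          # one lateral weight per pair i < i'
        v = damp * v + (1.0 - damp) * sigmoid(top_down + lateral)
    return v
```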
4 Inference in a directed hierarchy of MRFs
In a deep belief network, inference is very simple and very fast because of the way in which the
network is learned. Rather than first deciding how to represent the data and then worrying about inference afterwards, deep belief nets restrict themselves to learning representations for which accurate
variational inference can be done in a single bottom-up pass. Each layer computes an approximate
sample from its posterior distribution given the activities in the layer below. This can be done with
a single matrix multiply using the bottom-up "recognition" connections that were originally learned
by an RBM but are no longer part of the generative model. The recognition connections compute
an approximation to the product of a data-dependent likelihood term coming from the layer below
and a data-independent prior term that depends on the learned parameters of all the higher layers.
Each of these two terms can contain strong correlations, but the way in which the model is learned
ensures that these correlations cancel each other out so that the true posterior distribution in each
layer is very close to factorial and very simple to compute from the activities in the layer below.
The inference process is unaltered by adding an MRF at each hidden layer. The role of the MRFs
is to allow the generative process to mimic the constraints that are obeyed by the variables within a
layer when the network is being driven bottom-up by data. During inference, these constraints are
enforced by the data. From a biological perspective, it is very important for perceptual inference to
be fast and accurate, so it is very good that it does not involve any kind of iterative settling or belief
propagation. The MRFs are vital for imposing constraints during generation and for whitening the
learning signal so that weak higher-order structure is not masked by strong pairwise correlations.
During perceptual inference, however, the MRFs are mere spectators.
5 Whitening without waiting
Data is often whitened to prevent strong pairwise correlations from masking weaker but more interesting structure. An alternative to whitening the data is to modify the learning procedure so that
it acts as if the data were whitened and ignores strong pairwise correlations when learning the next
level of features. This has the advantage that perceptual inference is not slowed down by an explicit
whitening stage. If the lateral connections ensure that a pairwise correlation in the distribution of the
reconstructions is the same as in the data distribution, that correlation will be ignored by contrastive
divergence since the learning is driven by the differences between the two distributions. This also
explains why different hidden units learn different features even when they have the same connectivity: once one feature has made one aspect of the reconstructions match the data, there is no longer
any learning signal for another hidden unit to learn that same aspect.
Figure 1 shows how the features learned by the hidden units are affected by the presence of lateral
connections between the visible units. Hidden units are no longer required for modeling the strong
pairwise correlations between nearby pixels so they are free to discover more interesting features
than the simple on-center off-surround features that are prevalent when there are no connections
between visible units.
Figure 1: (A) A random sample of the filters learned by an RBM trained on 60,000 images of handwritten digits from the MNIST database (see Hinton et al. (2006) for details). (B) A random sample
of the filters learned by an SRBM trained on the same data. To produce each reconstruction, the
SRBM used 5 damped mean-field iterations with the top-down input from the hidden states fixed.
Adding lateral connections between the visible units changes the types of hidden features that are
learned. For simplicity each visible unit in the SRBM was connected to all 783 other visible units,
but only the local connections developed large weights and the lateral connections to each pixel
formed a small on-center off-surround field centered on the pixel. Pixels close to the edge of the
image that were only active one or two times in the whole training set behaved very differently:
They learned to predict the whole of the particular digit that caused them to be active.
6 Modeling patches of natural images
To illustrate the advantages of adding lateral connections to the hidden layers of a DBN we use
the well-studied task of modelling the statistical structure of images of natural scenes (Bell and
Sejnowski, 1997; Olshausen and Field, 1996; Karklin and Lewicki, 2005; Osindero et al., 2006; Lyu
and Simoncelli, 2006). Using DBNs, it is easy to build overcomplete and hierarchical generative
models of image patches. These are able to capture much richer types of statistical dependency than
traditional generative models such as ICA. They also have the potential to go well beyond the types
of dependencies that can be captured by other, more sophisticated, multi-stage approaches such as
(Karklin and Lewicki, 2005; Osindero et al., 2006; Lyu and Simoncelli, 2006).
6.1 Adapting Restricted Boltzmann machines to real-valued data
Hinton and Salakhutdinov (2006) show how the visible units of an RBM can be modified to allow it
to model real-valued data using linear visible variables with Gaussian noise, but retaining the binary
stochastic hidden units. The learning procedure is essentially unchanged especially if we use the
mean-field approximation for the visible units which is what we do.
Two generative DBN models, one with and one without lateral connectivity, were trained using
the updates from equations 2 and 4. The training data used consisted of 150,000 20 × 20 patches
extracted from images of natural scenes taken from the collection of Van Hateren (http://hlab.phys.rug.nl/imlib/index.html). The raw image intensities were pre-processed using a standard set of operations, namely an initial log-transformation, and then a normalisation step such that each pixel had zero-mean across the training
set. The patches were then whitened using a Zero-Phase Components analysis (ZCA) filter-bank.
The set of whitening filters is obtained by rotating the data into a co-ordinate system aligned with
the eigenvectors of the covariance matrix, then rescaling each component by the inverse square-root
of the corresponding eigenvalue, then rotating back into the original pixel co-ordinate system.
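That three-step description translates directly into code. A minimal sketch, assuming flattened patches as the rows of X and adding a small eps regulariser of our own for numerical stability:

```python
def zca_whiten(X, eps=1e-5):
    """ZCA: rotate into the eigenbasis of the covariance, rescale by the
    inverse square-root eigenvalues, rotate back to pixel coordinates."""
    Xc = X - X.mean(0)                        # zero-mean each pixel
    evals, E = np.linalg.eigh(np.cov(Xc, rowvar=False))
    Wz = E @ np.diag(1.0 / np.sqrt(evals + eps)) @ E.T
    return Xc @ Wz, Wz                        # whitened data and the filter bank
```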
Using ZCA has a similar effect to learning lateral connections between pixels (Welling and Hinton,
2002). We used ZCA whitened data for both models to make it clear that the advantage of lateral
connections is not just caused by their ability to whiten the input data. Because the data was whitened
we did not include lateral connections in the bottom layer of the lateral DBN. The results presented
in the figures that follow are all shown in "unwhitened pixel-space", i.e. the effects of the whitening
filter are undone for display purposes.
The models each had 2000 units in the first hidden layer, 500 in the second hidden layer and 1000
units in the third hidden layer. The generative abilities of both models are very robust against
variations in the number of hidden units in each layer, though it seems to be important for the
top layer to be quite large. In the case where lateral connections were used, the first and second
hidden layers of the final, composite model were fully laterally connected.
Data was taken in mini-batches of size 100, and training was performed for 50 epochs for the first
layer and 30 epochs for the remaining layers. A learning rate of $10^{-3}$ was used for the interlayer
connections, and half that rate for the lateral connections. Multiplicative weight decay of $10^{-2}$ multiplied by the learning rate was used, and a momentum factor of 0.9 was employed. When training
the higher-level SRBMs in the model with lateral connectivity, 30 parallel mean field updates were
used to produce the reconstructions with the top-down input from the hidden states held constant.
Each mean field update set the new activity of every "visible" unit to be 0.2 times the previous activity plus 0.8 times the value computed by applying the logistic function to the total input received
from the hidden units and the previous states of the visible units.
Learned filters
Figure 2 shows a random sample of the filters learned using an RBM with Gaussian visible units.
These filters are the same for both models. This representation is 5× overcomplete.
Figure 2: Filters from the first hidden layer. The results are generally similar to previous work on
learning representations of natural image patches. The majority of the filters are tuned in location,
orientation, and spatial frequency. The joint space of location and orientation is approximately
evenly tiled and the spatial frequency responses span a range of about four octaves.
6.1.1 Generating samples from the model
The same issue that necessitates the use of approximations when learning deep networks, namely
the unknown value of the partition function, also makes it difficult to objectively assess how well
they fit the data in the absence of predictive tasks such as classification. Since our main aim is to
demonstrate the improvement in data modelling ability that lateral connections bring to DBNs, we
simply present samples from similarly structured models, with and without lateral connections, and
compare these samples with real data.
Ten-thousand data samples were generated by randomly initialising the top-level (S)RBM states and
then running 300 iterations of a Gibbs sampling scheme between the top two layers. For models
without lateral connections, each iteration of the scheme consisted of a full parallel-update of the
top-most layer followed by a full parallel-update of the penultimate layer. In models with lateral
connections, each iteration consisted of a full parallel-update of the top-most layer followed by 50
rounds of sequential stochastic updates of each unit in the penultimate layer, under the influence of
the previously sampled top-layer states. (A different random ordering of units was drawn in each
update-round.) After running this Markov Chain we then performed an ancestral generative pass
down to the data layer. In the case of models with no lateral connections, this simply involved
sampling from the factorial conditional distribution at each layer. In the case of models with lateral
connections we performed 50 rounds of randomly-ordered, sequential stochastic updates under the
influence of the top-down inputs from the layer above. In both cases, on the final hidden layer update
before generating the pixel values, mean-field updates were used so that the data was generated using
the real-valued probabilities in the first hidden layer rather than stochastic-binary states.
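A rough sketch of this generation scheme, reusing sigmoid and srbm_reconstruct from the earlier snippets: the parameter lists are assumed to be ordered with the top-level (S)RBM first, the sequential stochastic updates are approximated by damped mean-field settling, and the linear Gaussian visible layer is glossed over for brevity.

```python
def generate_patch(Ws, Ls, bs_vis, b_hid_top, n_gibbs=300):
    """Equilibrate the top-level (S)RBM by Gibbs sampling, then settle each
    lower conditional MRF in turn given its top-down input."""
    W, L, b_v = Ws[0], Ls[0], bs_vis[0]
    h = (np.random.rand(1, W.shape[1]) < 0.5).astype(float)
    v = np.zeros((1, W.shape[0]))
    for _ in range(n_gibbs):                      # chain on the top two layers
        v = srbm_reconstruct(h, v, W, L, b_v, n_steps=50)
        h = (np.random.rand(*h.shape) < sigmoid(v @ W + b_hid_top)).astype(float)
    for W, L, b_v in zip(Ws[1:], Ls[1:], bs_vis[1:]):   # ancestral pass downwards
        v = srbm_reconstruct(v, np.zeros((1, W.shape[0])), W, L, b_v, n_steps=50)
    return v
```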
Figure 3: (A) Samples from a model without lateral connections. (B) Samples from a model with
lateral connections. (C) Examples of actual data, drawn at random. (D) Examples of actual data,
chosen to have closest cosine distance to samples from panel (B).
Figure 3 shows that adding undirected lateral interactions within each intermediate hidden layer of
a deep belief net greatly improves the model's ability to generate image patches that look realistic.
It is evident from the figure that the samples from the model with lateral connections are much more
similar to the real data and contain much more coherent, long-range structure. Belief networks with
only directed connections have difficulty capturing spatial constraints between the parts of an image
because, in a directed network, the only way to enforce constraints is by using observed descendants.
Unobserved ancestors can only be used to model shared sources of variation.
6.1.2 Marginal and pairwise statistics
In addition to the largely subjective comparisons from the previous section, if we perform some
simple aggregate analyses of the synthesized data we see that the samples from the model with
lateral connections are objectively well matched to those from true natural images. In the right-hand
column of figure 4 we show histograms of pixel intensities for real data and for data generated by the
two models. The kurtosis is 8.3 for real data, 7.3 for the model with lateral connections, and 3.4 for
the model with no lateral connections. If we make a histogram of the outputs of all of the filters in
the first hidden layer of the model, we discover that the kurtosis is 10.5 on real data, 10.3 on image
patches generated by the model with lateral connections, and 3.8 on patches generated by the other
model.
Columns one through five of figure 4 show the distributions of the response of one filter conditional
on the response of a second filter. Again, for image patches generated with lateral connections the
statistics are similar to the data and without lateral connections they are quite different.
Figure 4: Each row shows statistics computed from a different set of 10,000 images. The first
row is for real data. The second row is for image patches generated by the model with lateral
interactions. The third row is for patches generated without lateral interactions. Column six shows
histograms of pixel intensities. Columns 1-5 show conditional filter responses, in the style suggested
in (Wainwright and Simoncelli, 2000), for two different Gabor filters applied to the sampled images.
In columns 1-3 the filters are 2, 4, or 8 pixels apart. In column 4 they are at the same location but
orthogonal orientations. In column 5 they are at the same location and orientation but one octave
apart in spatial frequency.
7 Discussion
Our results demonstrate the advantages of using semi-restricted Boltzmann machines as the building
blocks when building deep belief nets. The model with lateral connections is very good at capturing
the statistical structure of natural image patches. In future work we hope to exploit this in a number
of image processing tasks that require a good prior over image patches.
The models presented in this paper had complete lateral connectivity, largely for simplicity of implementation in
MATLAB. Such a strategy would not be feasible were we to significantly scale up our networks.
Fortunately, there is an obvious solution: we can simply restrict the majority of lateral
interactions to a local neighbourhood and concomitantly have the hidden units focus their attention
on spatially localised regions of the image. A topographic ordering would then exist throughout the
various layers of the hierarchy. This would greatly reduce the computational load and it corresponds
to a sensible prior over image structures, especially if the local regions get larger as we move up the
hierarchy. Furthermore, it would probably make the process of settling within a layer much faster.
One limitation of the model we have described is that the top-down effects can only change the
effective biases of the units in the Markov random field at each level. The model becomes much
more powerful if top-down effects can modulate the interactions. For example, an "edge" can be
viewed as a breakdown in the local correlational structure of the image: pixel intensities cannot be
predicted from neighbors on the other side of an object boundary. A hidden unit that can modulate
the pairwise interactions rather than just the biases can form a far more abstract representation of an
edge that is not tied to any particular contrast or intensity (Geman and Geman, 1984). Extending our
model to this type of top-down modulation is fairly straightforward. Instead of using weights wij
that contribute energies $-v_i v_j w_{ij}$ we use weights $w_{ijk}$ that contribute energies $-v_i v_j h_k w_{ijk}$. This
allows the binary state of hk to gate the effective weight between visible units i and j. Memisevic
and Hinton (2007) show that the same learning methods can be applied with a single hidden layer and
there is no reason why such higher-order semi-restricted Boltzmann machines cannot be composed
into deep belief nets.
Although we have focussed on the challenging task of modeling patches of natural images, we
believe the ideas presented here are of much more general applicability. DBNs without lateral connections have produced state-of-the-art results in a number of domains including document retrieval
(Hinton and Salakhutdinov, 2006), character recognition (Hinton et al., 2006), lossy image compression (Hinton and Salakhutdinov, 2006), and the generation of human motion (Taylor et al., 2007).
Lateral connections may help in all of these domains.
Acknowledgments
We are grateful to the members of the machine learning group at the University of Toronto for
helpful discussions. This work was supported by NSERC, CFI and CIFAR. GEH is a fellow of
CIFAR and holds a CRC chair.
References
Bell, A. J. and Sejnowski, T. J. (1997). The "independent components" of natural scenes are edge
filters. Vision Research, 37(23):3327-3338.
Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H. (2007). Greedy layer-wise training of
deep networks. In Schölkopf, B., Platt, J., and Hoffman, T., editors, Advances in Neural Information
Processing Systems 19. MIT Press, Cambridge, MA.
Geman, S. and Geman, D. (1984). Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell., 6.
Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural
Computation, 14(8):1711-1800.
Hinton, G. E., Osindero, S., and Teh, Y. W. (2006). A fast learning algorithm for deep belief nets.
Neural Computation, 18.
Hinton, G. E. and Salakhutdinov, R. (2006). Reducing the dimensionality of data with neural networks. Science, 313.
Karklin, Y. and Lewicki, M. (2005). A hierarchical bayesian model for learning nonlinear statistical
regularities in nonstationary natural signals. Neural Computation, 17(2).
Lyu, S. and Simoncelli, E. (2006). Statistical modeling of images with fields of gaussian scale
mixtures. In Advances in Neural Information Processing Systems, volume 19.
Memisevic, R. F. and Hinton, G. E. (2007). Unsupervised learning of image transformations. In
Computer Vision and Pattern Recognition. IEEE Computer Society.
Olshausen, B. A. and Field, D. J. (1996). Emergence of simple-cell receptive field properties by
learning a sparse code for natural images. Nature, 381(6583):607-609.
Osindero, S., Welling, M., and Hinton, G. E. (2006). Topographic product models applied to natural
scene statistics. Neural Computation, 18(2).
Taylor, G. W., Hinton, G. E., and Roweis, S. (2007). Modeling human motion using binary latent
variables. In Schölkopf, B., Platt, J., and Hoffman, T., editors, Advances in Neural Information Processing
Systems 19. MIT Press, Cambridge, MA.
Wainwright, M. and Simoncelli, E. (2000). Scale mixtures of Gaussians and the statistics of natural
images. In Advances in Neural Information Processing Systems, volume 12, pages 855-861.
Welling, M. and Hinton, G. E. (2002). A new learning algorithm for mean field Boltzmann machines.
In International Joint Conference on Neural Networks, Madrid.
RecNorm: Simultaneous Normalisation and
Classification applied to Speech Recognition
John S. Bridle
Royal Signals and Radar Est.
Great Malvern
UK WR14 3PS
Stephen J. Cox
British Telecom Research Labs.
Ipswich
UK IP5 7RE
Abstract
A particular form of neural network is described, which has terminals
for acoustic patterns, class labels and speaker parameters. A method of
training this network to "tune in" the speaker parameters to a particular
speaker is outlined, based on a trick for converting a supervised network
to an unsupervised mode. We describe experiments using this approach
in isolated word recognition based on whole-word hidden Markov models.
The results indicate an improvement over speaker-independent performance and, for unlabelled data, a performance close to that achieved on
labelled data.
1 INTRODUCTION
We are concerned to emulate some aspects of perception. In particular, the way that
a stimulus which is ambiguous, perhaps because of unknown lighting conditions, can
become unambiguous in the context of other such stimuli: the fact that they are
subject to the same unknown conditions gives our perceptual apparatus enough
constraints to solve the problem. Individual words are often ambiguous even to
human listeners. For instance a Cockney might say the word "ace" to sound the
same as a Standard English speaker's "ice". Similarly with "room" and "rum", or
"work" and "walk" in other pairs of British English accents. If we heard one of these
ambiguous pronunciations, knowing nothing else about the speaker we could not tell
which word had been said. For current automatic speech recognition (ASR) systems
such effects are much more frequent, because we do not know how to concentrate
on the important aspects of the signal locally, nor how to exploit the fact that some
unknown properties apply to whole words, nor how to bring to bear on the task of
acoustic disambiguation all the information that is normally latent in the context
of the utterance.
Most attempts to construct ASR systems which can be used by many persons have
used so-called speaker-independent models. When decoding a short sequence of
words there is no way of imposing our knowledge that all the speech is uttered by
one person.
To enable adaptation using small amounts of speech from a new speaker we propose
to factor the speech knowledge into speaker-independent models, continuous speaker-specific parameters and a transformation which modifies the models according to
the speaker parameters. (In this paper we shall only use transformations which can
just as easily be applied to the input patterns.) We are especially interested in the
possibility of estimating such parameters from quite small amounts of unlabelled
speech, such as a few short words or one longer word. Although the types of models
and transformations we have used are very simple, we hope the general approach
will be applicable to quite sophisticated models and transformations which will be
necessary for future high-performance speech recognition systems.
2 AN ADAPTIVE NETWORK APPROACH
2.1 GENERAL IDEA
Suppose we had a feed-forward network with three (vector-valued) terminals, which
encapsulates our knowledge of the relationship between acoustic patterns, X, class
labels (e.g. word identities) C, and speaker parameters, Q.
Training such a network seems difficult, because although we can supply (X ,C)
pairs, we do not know the appropriate values of Q. (We only know the names of
the speakers, or perhaps some phonetician's descriptive labels.)
In training the network we start with default values of Q, feed forward from X and
Q to C, back-propagate derivatives to internal parameters of the network (weights,
transition probabilities, etc.) and also to the Qs, enforcing the constraint that the
Qs for anyone speaker stay equal. We can imagine one copy of the network for
each utterance, with the Q terminals of networks dealing with the same speaker
strapped together. One convenient implementation (for a small number of training
speakers) is to adapt one Q vector per speaker in a set of weights from one-from-N
coded speaker identity inputs to linear units, as we shall see later.
Once the network is trained we have two modes of use. If we have available one
or more known utterances by a new speaker, then we can "tune-in" to the speaker
(as during training) except that only the Q inputs are adjusted. The case of most
interest in this paper, however, is when we have a few unknown words from an
unknown speaker. We set up a Q-strapped set of networks, one for each word,
initialise the Q values to their defaults, propagate forwards to produce a set of
distributions across word labels, and then we use a technique which tends to sharpen
these distributions. In the simplest case, the sharpening process could be a matter
of: for each utterance pick the word label with the largest output, and assuming
it to be correct back-propagate derivatives to the common Q. In practice, we can
use a gentler method in which large outputs get most 'encouragement'. For some
networks it is possible to show that such a "phantom target" procedure can lead
to hill-climbing on the likelihood of the data given an assumption about the form of
the generator of the data (see Appendix).
2.2 SIMPLE NETWORK ILLUSTRATION
We have explored these ideas using a very simple network based on that in figure
1. It can be viewed either as a feedforward network with radial (minus Euclidean
distance squared) units and a generalised-logistic (Softmax) output non-linearity,
or as a Gaussian classifier in which the covariance matrices are unit diagonal (see
[Bri90b]). Training is done by gradient-based optimisation, using back-propagation
of partial derivatives. During training the criterion is based on relative entropy
(likelihood of the targets given the network outputs) [Bri90c]. (Such discriminative
training can lead to different results from the usual model-based methods [Bri90b],
which in this case would set the reference points at the data means for each class.)
This simple classifier network is preceded by a full linear transformation (6 parameters), so the equivalent model-based classifier has Gaussian distributions with the
same arbitrary covariance matrix for each class.) We use the biases of the linear
units as speaker parameters, so the weights from speaker identity inputs go straight
into the hidden units, as in figure 2.
During adaptation to a new speaker from unlabelled tokens, the speaker parameters
of the transformation are allowed to adapt, but the ("phantom") targets are derived
from the outputs themselves (the targets are just double the outputs) so that the
largest outputs are encouraged.
In figure 3 we see the adaptation of the positions of the reference points of the
radial units in figure 2 when the input points are essentially the 6 reference points
displaced to one side (to represent one example of each word spoken by a new
speaker). Adaptation based on tentative classifications pulls the reference points
towards a position where the inputs can be given confident, consistent labels.
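As a concrete illustration of this procedure (the gradient it follows is derived in the Appendix), here is a minimal sketch of unsupervised tuning of a single speaker bias vector q for the simple Gaussian network; NumPy and all names are our own assumptions, not the authors' code.

```python
import numpy as np

def adapt_speaker_bias(X, refs, n_iters=50, lr=0.1):
    """Gradient ascent on L = sum_n log sum_c exp(-||x_n + q - m_c||^2),
    i.e. the phantom-target rule: each output feeds back a target 2*Q."""
    q = np.zeros(X.shape[1])
    for _ in range(n_iters):
        diff = (X + q)[:, None, :] - refs[None]          # (N, C, D)
        V = -(diff ** 2).sum(-1)                         # V_c(x) for each pattern
        Q = np.exp(V - V.max(1, keepdims=True))
        Q /= Q.sum(1, keepdims=True)                     # Softmax outputs
        # dL/dq = sum_n sum_c Q_nc * dV_nc/dq, with dV/dq = -2*(x + q - m_c)
        grad = -2.0 * (Q[:, :, None] * diff).sum((0, 1))
        q += lr * grad / len(X)
    return q
```

With X holding a few unknown utterances and refs the class reference points, the returned q plays the role of the common speaker parameters Q strapped across the networks.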
3 SPEECH RECOGNITION EXPERIMENTS
We have applied these ideas to the problem of recognising a few short, confusable
words from a known set, all spoken by the same unknown speaker. If our method
works we should be able to recognise each word better (on average) if we also look
at a few other unknown words from the same speaker.
The dataset [SaI89], which had been recorded previously for other purposes, comprised the British English isolated pronunciations of the names of the letters of
the alphabet, each spoken 3 times by each speaker. The 104 speakers were divided
into two groups of 52 (Train and Test), balanced for age and sex. Initial acoustic
analysis produced 28-component spectrum vectors, 100 per second. In place of the
2-D input patterns discussed above, each speech pattern was a variable-duration
sequence (typically 50) of 28-vectors.
In place of each simple Gaussian density class-model we used a set of Gaussian
densities and a matrix of probabilities of transitions between them. Each class-model is thus a hidden Markov model (HMM) of a word. We used 26 HMMs, each
Fig.1: Feedforward network (radial units) implementing a simple Gaussian classifier.
Fig.2: Gaussian classifier network with input transformation and speaker inputs.
Fig.3: Adaptation to 6 displaced points.
Fig.4: Average error rates for alphabet word recognition (error rate, 10%-15%, against condition: No Adapt.; 3, 10, 20 or 78 words given; 'Cheat' mode).
with 15 states, each with a 3-component Gaussian mixture output distribution. For
further details see [CB90].
The equivalent to the evaluation of a Gaussian density in the simple network is
the Forward (or Alpha) computation of the likelihood of the data given a (hidden
Markov) model. This calculation can be thought of as being performed by a
recurrent network of a special form.
When we include the Bayes inversion to
produce probabilities of the classes (this is a normalisation if we assume equal prior
probabilities) we obtain the equivalent of the simple network of figure 1, which we
call an Alphanet[Bri90a].
In place of the 2-component linear transformation in figure 2 we use a constrained
linear transformation based on [Hun81]: $y_i = a_i x_{i-1} + b_i x_i + c_i x_{i+1} + d_i$, where
$x_i$, $i = 1, \ldots, 28$, is the log spectrum amplitude in frequency channel $i$ (a code sketch of this transform follows the list of conditions below).
We tried three conditions:
- Bias Only: $a_i = 0$, $b_i = 1$, $c_i = 0$ (28 parameters)
- Fixed Shift: $a_i = a$, $b_i = b$, $c_i = c$ (31 parameters)
- Variable Shift: the general case (107 parameters)
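A minimal sketch of this transform, assuming NumPy and zero-padding at the edge channels (the paper does not state its edge handling):

```python
import numpy as np

def spectral_transform(x, a, b, c, d):
    """y_i = a_i*x_{i-1} + b_i*x_i + c_i*x_{i+1} + d_i for a 28-channel
    log-spectrum frame x; a, b, c, d are length-28 parameter vectors."""
    xp = np.pad(x, 1)                       # zero-pad channels 0 and 29
    return a * xp[:-2] + b * xp[1:-1] + c * xp[2:] + d
```

For the Fixed Shift condition a, b and c collapse to shared scalars, which NumPy broadcasting handles directly.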
Figure 4 shows average word error rates for the three types of transformation, for
different numbers of utterances taken together (N = 3,10,20,78). N = 1 is the
non-adaptive case. 'Cheat' Mode is a check on the power of the transformations:
for each test speaker, all 78 utterances were used to set the parameters of the
transformation, then recognition performance was measured using those parameters
of the same utterances.
We see:
- Use of unsupervised adaptation reduced the error rates.
- The reductions are not spectacular (15% errors to 12% errors, a reduction in
error rate by 20%) but they are statistically significant and may be practically
significant too.
- The performance in 'Cheat' Mode is only a little better than in unsupervised
mode, so performance is being limited by the power of the transformation.
- The Fixed Shift transformation gives quite good results even on only 3 words
at a time.
When tested on a 120-talker telephone-line database of isolated digits collected at British Telecom, the best unsupervised speaker adaptation technique gave a 37% decrease in error rate (for both supervised and unsupervised adaptation on 5 utterances) using a simple front-end consisting of 8 MFCCs (mel frequency-scale cepstrum coefficients). A more sophisticated front-end (using differential information and energy) improved the unadapted performance by 63% over the 8-MFCC front-end. Using this front-end, the best unsupervised adaptation technique (on 5 utterances) decreased the error rate by a further 25%.
4 CONCLUSIONS
The results reported here show that simultaneous word recognition and speaker normalisation can be made to work, that it improves performance over the corresponding speaker-independent version, and that given 3 to 10 unknown words performance can be almost as good as when the adaptation is done using knowledge of the word identities. The main extensions we are interested in are the use of non-linear transformations, and learning low-dimensional but effective speaker parameterisations.
A  Unsupervised Adaptation using Phantom Targets
We aim to motivate the 'phantom target' trick of feeding back twice each output of
the network as a target.
Suppose we have a classifier network, with a 1-from-N output coding, and a Softmax output nonlinearity. We write Q_j for an output value, V_j for an input to the Softmax output stage, x for the input to the network, c for a class and θ for parameters which we may want to adjust. A typical output value is
Q_j(x, θ) = e^{V_j(x, θ)} / Σ_k e^{V_k(x, θ)}.
The output values are interpretable as estimates of posterior probabilities: Q_j ≈ Pr(c = j | x, θ). For the next step we assume there are some implicit probability density functions P_j(x, θ) ≈ Pr(x | c = j, θ). Assuming equal prior probabilities of the classes for simplicity, Bayes' rule gives
Q_j(x, θ) = P_j(x, θ) / Σ_{k=1}^N P_k(x, θ),
so we suppose that
P_j(x, θ) = (1 / Z_j(θ)) e^{V_j(x, θ)},
where the normalisation is
Z_j(θ) = ∫ e^{V_j(x, θ)} dx.
In the networks we use, the same normalisation applies to all the classes, so we write Z_j(θ) = Z(θ).
A maximum-likelihood approach to unsupervised adaptation maximises the likelihood of the data given the set of (equally probable) distributions, which is
P(x, θ) = Σ_{k=1}^N P_k(x, θ) · (1/N).
It is simpler to maximise the log likelihood:
L(x, θ) ≜ log P(x, θ) = log Σ_k P_k(x, θ) - log N = log Σ_k e^{V_k(x, θ)} - log Z(θ) - log N.
We shall need
∂L/∂V_j = e^{V_j(x, θ)} / Σ_k e^{V_k(x, θ)} - (1/Z(θ)) · ∂Z(θ)/∂V_j(x, θ).
(The likelihood of the whole training set is the product of the likelihoods of the individual patterns, and the log turns the product into a sum, so we can sum the derivatives of L over the training set.)
We can often assume that the normalisation is independent of θ, giving
∂L/∂V_j = e^{V_j(x, θ)} / Σ_k e^{V_k(x, θ)} = Q_j(x, θ).
If we have a supervised backprop network using the relative entropy based criterion (rather than squared error) [1], we are minimising J = -Σ_j T_j log Q_j, where T_j is the target for the jth output. We know [Bri90b] that ∂J/∂V_j = Q_j - T_j, so if we set T_j = 2Q_j we have ∂J/∂V_j = -∂L/∂V_j, and minimising J is equivalent to maximising L.
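The whole trick fits in a few lines; the numpy sketch below is ours, not the original code, and simply checks the identity dJ/dV_j = Q_j - T_j = -Q_j when the phantom targets T_j = 2Q_j are fed back.

    import numpy as np

    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    def phantom_target_gradient(v):
        # Supervised relative-entropy training gives dJ/dV_j = Q_j - T_j.
        # Feeding each output back twice as its own target (T_j = 2 Q_j)
        # yields dJ/dV_j = -Q_j, i.e. gradient ascent on the unsupervised
        # log-likelihood L.
        q = softmax(v)
        t = 2.0 * q
        return q - t   # equals -q

    v = np.array([1.0, -0.5, 0.3])
    assert np.allclose(phantom_target_gradient(v), -softmax(v))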
For the simple Gaussian network of Figure 1, this unsupervised adaptation, applied to the reference points, can be understood as an on-line, gradient-descent relative of the k-means cluster analysis procedure, or of the LBG vector quantiser design method, or indeed of Kohonen's feature map (without the neighbourhood constraints).
Copyright © Controller HMSO London 1989
References
[Bri90a] J. S. Bridle. Alphanets: a recurrent 'neural' network architecture with a hidden Markov model interpretation. Speech Communication, special "Neurospeech" issue, February 1990.
[Bri90b] J. S. Bridle. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In F. Fogelman-Soulié and J. Hérault, editors, Neuro-computing: algorithms, architectures and applications, NATO ASI Series on Systems and Computer Science. Springer-Verlag, 1990.
[Bri90c] J. S. Bridle. Training stochastic model recognition algorithms as networks can lead to maximum mutual information estimation of parameters. In Advances in Neural Information Processing Systems 2. Morgan Kaufmann, 1990.
[CB90] S. J. Cox and J. S. Bridle. Simultaneous speaker normalisation and utterance labelling using Bayesian/neural net techniques. In Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, 1990.
[Hun81] M. J. Hunt. Speaker adaptation for word-based speech recognition. J. Acoust. Soc. Amer., 69:S41-S42, 1981. (Abstract only.)
[SaI89] J. A. S. Salter. The RT5233 Alphabetic database for the Connex project. Technical Report RT52/G231/89, BT Technology Executive, 1989.
2,514 | 3,280 | Compressed Regression
Shuheng Zhou, John Lafferty, Larry Wasserman
Computer Science Department, Machine Learning Department, and Department of Statistics
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
Recent research has studied the role of sparsity in high dimensional regression and
signal reconstruction, establishing theoretical limits for recovering sparse models
from sparse data. In this paper we study a variant of this problem where the
original n input variables are compressed by a random linear transformation to
m ≪ n examples in p dimensions, and establish conditions under which a sparse
linear model can be successfully recovered from the compressed data. A primary
motivation for this compression procedure is to anonymize the data and preserve
privacy by revealing little information about the original data. We characterize
the number of random projections that are required for ℓ1-regularized compressed regression to identify the nonzero coefficients in the true model with probability approaching one, a property called "sparsistence." In addition, we show that ℓ1-regularized compressed regression asymptotically predicts as well as an oracle linear model, a property called "persistence." Finally, we characterize the
privacy properties of the compression procedure in information-theoretic terms,
establishing upper bounds on the rate of information communicated between the
compressed and uncompressed data that decay to zero.
1 Introduction
Two issues facing the use of statistical learning methods in applications are scale and privacy. Scale
is an issue in storing, manipulating and analyzing extremely large, high dimensional data. Privacy
is, increasingly, a concern whenever large amounts of confidential data are manipulated within an
organization. It is often important to allow researchers to analyze data without compromising the
privacy of customers or leaking confidential information outside the organization. In this paper we
show that sparse regression for high dimensional data can be carried out directly on a compressed
form of the data, in a manner that can be shown to guard privacy in an information theoretic sense.
The approach we develop here compresses the data by a random linear or affine transformation,
reducing the number of data records exponentially, while preserving the number of original input
variables. These compressed data can then be made available for statistical analyses; we focus on
the problem of sparse linear regression for high dimensional data. Informally, our theory ensures
that the relevant predictors can be learned from the compressed data as well as they could be from
the original uncompressed data. Moreover, the actual predictions based on new examples are as
accurate as they would be had the original data been made available. However, the original data
are not recoverable from the compressed data, and the compressed data effectively reveal no more
information than would be revealed by a completely new sample. At the same time, the inference
algorithms run faster and require fewer resources than the much larger uncompressed data would
require. The original data need not be stored; they can be transformed "on the fly" as they come in.
In more detail, the data are represented as an n × p matrix X. Each of the p columns is an attribute, and each of the n rows is the vector of attributes for an individual record. The data are compressed by a random linear transformation X ↦ X̃ ≡ ΦX, where Φ is a random m × n matrix with m ≪ n. It is also natural to consider a random affine transformation X ↦ X̃ ≡ ΦX + Δ, where Δ is a random m × p matrix. Such transformations have been called "matrix masking" in the privacy literature [6]. The entries of Φ and Δ are taken to be independent Gaussian random variables, but other distributions are possible. We think of X̃ as "public," while Φ and Δ are private and only needed at the time of compression. However, even with Δ = 0 and Φ known, recovering X from X̃ requires solving a highly under-determined linear system and comes with information theoretic privacy guarantees, as we demonstrate.
In standard regression, a response variable Y = Xβ + ε ∈ R^n is associated with the input variables, where the ε_i are independent, mean zero additive noise variables. In compressed regression, we assume that the response is also compressed, resulting in the transformed response Ỹ ∈ R^m given by Y ↦ Ỹ ≡ ΦY = ΦXβ + Φε = X̃β + ε̃. Note that under compression, the ε̃_i, i ∈ {1, ..., m}, in the transformed noise ε̃ = Φε are no longer independent. In the sparse setting, the parameter β ∈ R^p is sparse, with a relatively small number s = ‖β‖_0 of nonzero coefficients in β. The method we focus on is ℓ1-regularized least squares, also known as the lasso [17]. We study the ability of the compressed lasso estimator to identify the correct sparse set of relevant variables and to predict well.
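A minimal numpy sketch of the compression step, written by us under the paper's setup (Φ with i.i.d. N(0, 1/n) entries; only the compressed pair is released and Φ stays private); the sizes and the toy model are illustrative only.

    import numpy as np

    def compress(X, Y, m, rng):
        # Matrix masking: n records -> m << n random linear combinations.
        n = X.shape[0]
        Phi = rng.normal(0.0, 1.0 / np.sqrt(n), size=(m, n))
        return Phi @ X, Phi @ Y   # X_tilde, Y_tilde; Phi is then discarded

    rng = np.random.default_rng(1)
    n, p, m = 2000, 50, 200
    X = rng.normal(size=(n, p))
    beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]   # sparse truth, s = 3
    Y = X @ beta + rng.normal(size=n)
    X_tilde, Y_tilde = compress(X, Y, m, rng)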
We omit details and technical assumptions in the following theorems for clarity. Our first result
shows that the lasso is sparsistent under compression, meaning that the correct sparse set of relevant
variables is identified asymptotically.
Sparsistence (Theorem 3.3): If the number of compressed examples m satisfies C_1 s^2 log(nps) ≤ m ≤ C_2 √(n / log n), and the regularization parameter λ_m satisfies λ_m → 0 and m λ_m^2 / log p → ∞, then the compressed lasso estimator
β̃_m = arg min_β (1/2m) ‖Ỹ - X̃β‖_2^2 + λ_m ‖β‖_1
is sparsistent: P(supp(β̃_m) = supp(β)) → 1 as m → ∞, where supp(β) = {j : β_j ≠ 0}.
Our second result shows that the lasso is persistent under compression. Roughly speaking, persistence [10] means that the procedure predicts well, as measured by the predictive risk R(β) = E(Y - β^T X)^2, where X ∈ R^p is a new input vector and Y is the associated response. Persistence is a weaker condition than sparsistency, and in particular does not assume that the true model is linear.
Persistence (Theorem 4.1): Given a sequence of sets of estimators B_{n,m} ⊆ R^p such that B_{n,m} = {β : ‖β‖_1 ≤ L_{n,m}} with log^2(np) ≤ m ≤ n, the sequence of compressed lasso estimators
β̃_{n,m} = arg min_{‖β‖_1 ≤ L_{n,m}} ‖Ỹ - X̃β‖_2^2
is persistent with respect to the predictive risk R(β) = E(Y - β^T X)^2 over uncompressed data and B_{n,m}, meaning that R(β̃_{n,m}) - inf_{‖β‖_1 ≤ L_{n,m}} R(β) → 0 in probability as n → ∞, in case L_{n,m} = o((m / log(np))^{1/4}).
Our third result analyzes the privacy properties of compressed regression. We evaluate privacy in information theoretic terms by bounding the average mutual information I(X̃; X)/(np) per matrix entry in the original data matrix X, which can be viewed as a communication rate. Bounding this mutual information is intimately connected with the problem of computing the channel capacity of certain multiple-antenna wireless communication systems [13].
Information Resistance (Propositions 5.1 and 5.2): The rate at which information about X is revealed by the compressed data X̃ satisfies r_{n,m} = sup I(X̃; X)/(np) = O(m/n) → 0, where the supremum is over distributions on the original data X.
As summarized by these results, compressed regression is a practical procedure for sparse learning
in high dimensional data that has provably good properties. Connections with related literature are
briefly reviewed in Section 2. Analyses of sparsistence, persistence and privacy properties appear in Sections 3-5. Simulations for sparsistence and persistence of the compressed lasso are presented in Section 6. The proofs are included in the full version of the paper, available at http://arxiv.org/abs/0706.0534.
2 Background and Related Work
In this section we briefly review related work in high dimensional statistical inference, compressed
sensing, and privacy, to place our work in context.
Sparse Regression. An estimator that has received much attention in the recent literature is the lasso β̂_n [17], defined as
β̂_n = arg min_β (1/2n) ‖Y - Xβ‖_2^2 + λ_n ‖β‖_1,
where λ_n is a regularization parameter. In [14] it was shown that the lasso is consistent in the high dimensional setting under certain
assumptions. Sparsistency proofs for high dimensional problems have appeared recently in [20]
and [19]. The results and method of analysis of Wainwright [19], where X comes from a Gaussian
ensemble and ε_i is i.i.d. Gaussian, are particularly relevant to the current paper. We describe this Gaussian Ensemble result, and compare our results to it in Sections 3 and 6. Given that under compression the noise ε̃ = Φε is not i.i.d., one cannot simply apply this result to the compressed case.
Persistence for the lasso was first defined and studied by Greenshtein and Ritov in [10]; we review
their result in Section 4.
Compressed Sensing. Compressed regression has close connections to, and draws motivation from
compressed sensing [4, 2]. However, in a sense, our motivation is the opposite of compressed
sensing. While compressed sensing of X allows a sparse X to be reconstructed from a small number
of random measurements, our goal is to reconstruct a sparse function of X . Indeed, from the point
of view of privacy, approximately reconstructing X , which compressed sensing shows is possible
if X is sparse, should be viewed as undesirable; we return to this point in Section ??. Several
authors have considered variations on compressed sensing for statistical signal processing tasks
[5, 11]. They focus on certain hypothesis testing problems under sparse random measurements, and
a generalization to classification of a signal into two or more classes. Here one observes y = Φx, where y ∈ R^m, x ∈ R^n and Φ is a known random measurement matrix. The problem is to select between the hypotheses H̃_i : y = Φ(s_i + ε). The proofs use concentration properties of random projection, which underlie the celebrated Johnson-Lindenstrauss lemma. The compressed regression
problem we introduce can be considered as a more challenging statistical inference task, where the
problem is to select from an exponentially large set of linear models, each with a certain set of
relevant variables with unknown parameters, or to predict as well as the best linear model in some
class.
Privacy. Research on privacy in statistical data analysis has a long history, going back at least to [3].
We refer to [6] for discussion and further pointers into this literature; recent work includes [16]. The
work of [12] is closely related to our work at a high level, in that it considers low rank random linear
transformations of either the row space or column space of the data X. The authors note the Johnson-Lindenstrauss lemma, and argue heuristically that data mining procedures that exploit correlations
or pairwise distances in the data are just as effective under random projection. The privacy analysis
is restricted to observing that recovering X from X̃ requires solving an under-determined linear
system. We are not aware of previous work that analyzes the asymptotic properties of a statistical
estimator under random projection in the high dimensional setting, giving information-theoretic
guarantees, although an information-theoretic quantification of privacy was proposed in [1]. We
cast privacy in terms of the rate of information communicated about X through X̃, maximizing over
all distributions on X , and identify this with the problem of bounding the Shannon capacity of a
multi-antenna wireless channel, as modeled in [13]. Finally, it is important to mention the active
area of cryptographic approaches to privacy from the theoretical computer science community, for
instance [9, 7]; however, this line of work is quite different from our approach.
3 Compressed Regression is Sparsistent
In the standard setting, X is an n × p matrix, Y = Xβ + ε is a vector of noisy observations under a linear model, and p is considered to be a constant. In the high-dimensional setting we allow p to grow with n. The lasso refers to the following: (P_1) min ‖Y - Xβ‖_2^2 such that ‖β‖_1 ≤ L. In Lagrangian form, this becomes: (P_2) min (1/2n) ‖Y - Xβ‖_2^2 + λ_n ‖β‖_1. For an appropriate choice of the regularization parameter λ = λ(Y, L), the solutions of these two problems coincide.
In compressed regression we project each column X_j ∈ R^n of X to a subspace of m dimensions, using an m × n random projection matrix Φ. Let X̃ = ΦX be the compressed design matrix, and let Ỹ = ΦY be the compressed response. Thus, the transformed noise ε̃ is no longer i.i.d. The compressed lasso is the following optimization problem, for Ỹ = ΦXβ + Φε = X̃β + ε̃, with Ω̃_m being the set of optimal solutions:
(P̃_2)   (a) min_{β ∈ R^p} (1/2m) ‖Ỹ - X̃β‖_2^2 + λ_m ‖β‖_1,   (b) Ω̃_m = arg min_{β ∈ R^p} (1/2m) ‖Ỹ - X̃β‖_2^2 + λ_m ‖β‖_1.   (1)
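Since (1) is an ordinary ℓ1-penalized least-squares problem in the compressed variables, any lasso solver applies. The proximal-gradient (ISTA) sketch below is ours, a simple stand-in for the LARS implementation used in Section 6.

    import numpy as np

    def soft_threshold(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def compressed_lasso(X_tilde, Y_tilde, lam, n_iter=5000):
        # Minimize (1/2m)||Y_tilde - X_tilde beta||_2^2 + lam*||beta||_1 by ISTA.
        m, p = X_tilde.shape
        L = np.linalg.norm(X_tilde, 2) ** 2 / m   # Lipschitz constant of the gradient
        beta = np.zeros(p)
        for _ in range(n_iter):
            grad = X_tilde.T @ (X_tilde @ beta - Y_tilde) / m
            beta = soft_threshold(beta - grad / L, lam / L)
        return beta

The estimated support is then {j : β_j ≠ 0}, which sparsistency says matches supp(β*) with probability tending to one under the conditions below.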
Although sparsistency is the primary goal in selecting the correct variables, our analysis establishes
conditions for the stronger property of sign consistency:
Definition 3.1 (Sign Consistency). A set of estimators Ω_n is sign consistent with the true β if
P(∃ β̂_n ∈ Ω_n s.t. sgn(β̂_n) = sgn(β)) → 1 as n → ∞,
where sgn(·) is given by sgn(x) = 1, 0, or -1 for x >, =, or < 0 respectively.
As a shorthand, denote the event that a sign consistent solution exists by
E(sgn(β̂_n) = sgn(β*)) := {∃ β̂ ∈ Ω_n such that sgn(β̂) = sgn(β*)}.
Clearly, if a set of estimators is sign consistent then it is sparsistent.
All recent work establishing results on sparsity recovery assumes some form of incoherence condition on the data matrix X. To formulate such a condition, it is convenient to introduce an additional piece of notation. Let S = {j : β_j ≠ 0} be the set of relevant variables and let S^c = {1, ..., p} \ S be the set of irrelevant variables. Then X_S and X_{S^c} denote the corresponding sets of columns of the matrix X. We will impose the following incoherence condition; related conditions are used by [18] in a deterministic setting. Let ‖A‖_∞ = max_i Σ_{j=1}^p |A_{ij}| denote the matrix ∞-norm.
Definition 3.2 (S-Incoherence). Let X be an n × p matrix and let S ⊂ {1, ..., p} be nonempty. We say that X is S-incoherent in case
‖(1/n) X_{S^c}^T X_S‖_∞ + ‖(1/n) X_S^T X_S - I_{|S|}‖_∞ ≤ 1 - η,  for some η ∈ (0, 1].   (2)
Although not explicitly required, we only apply this definition to X such that the columns of X satisfy ‖X_j‖_2^2 = Θ(n), ∀j ∈ {1, ..., p}. We can now state our main result on sparsistency.
Theorem 3.3. Suppose that, before compression, Y = Xβ* + ε, where each column of X is normalized to have ℓ2-norm √n, and ε ~ N(0, σ^2 I_n). Assume that X is S-incoherent, where S = supp(β*), and define s = |S| and ρ_m = min_{i ∈ S} |β*_i|. We observe, after compression, Ỹ = X̃β* + ε̃, where Ỹ = ΦY, X̃ = ΦX, and ε̃ = Φε, with Φ_{ij} ~ N(0, 1/n). Let β̃_m ∈ Ω̃_m be as in (1b). If
(16 C_1 s^2 / η^2)(ln p + 2 log n + log 2(s+1)) ≤ m ≤ √( n / (4 C_2 s + 16 log n) ),   (3)
with C_1 = 4e/√(6π) ≈ 2.5044 and C_2 = √8 e ≈ 7.6885, and λ_m → 0 satisfies
(a) m η^2 λ_m^2 / log(p - s) → ∞, and (b) (1/ρ_m) { √(log s / m) + λ_m ‖((1/n) X_S^T X_S)^{-1}‖_∞ } → 0,   (4)
then the compressed lasso is sparsistent: P(supp(β̃_m) = supp(β)) → 1 as m → ∞.
4 Compressed Regression is Persistent
Persistence (Greenshtein and Ritov [10]) is a weaker condition than sparsistency. In particular, the assumption that E(Y|X) = β^T X is dropped. Roughly speaking, persistence implies that a procedure predicts well. We review the arguments in [10] first; we then adapt them to the compressed case.
Uncompressed Persistence. Consider a new pair (X, Y) and suppose we want to predict Y from X. The predictive risk using predictor β^T X is R(β) = E(Y - β^T X)^2. Note that this is a well-defined quantity even though we do not assume that E(Y|X) = β^T X. It is convenient to rewrite the risk in the following way: define Q = (Y, X_1, ..., X_p) and γ = (-1, β_1, ..., β_p)^T; then
R(β) = γ^T Σ γ, where Σ = E(Q Q^T).   (5)
Let Q = (Q_1 Q_2 ⋯ Q_n)^T, where Q_i = (Y_i, X_{1i}, ..., X_{pi})^T ~ Q, for i = 1, ..., n, are i.i.d. random vectors, and the training error is
R̂_n(β) = (1/n) Σ_{i=1}^n (Y_i - X_i^T β)^2 = γ^T Σ̂_n γ, where Σ̂_n = (1/n) Q^T Q.   (6)
Given B_n = {β : ‖β‖_1 ≤ L_n} for L_n = o((n / log n)^{1/4}), we define the oracle predictor β_{*,n} = arg min_{‖β‖_1 ≤ L_n} R(β), and the uncompressed lasso estimator β̂_n = arg min_{‖β‖_1 ≤ L_n} R̂_n(β).
Assumption 1. Suppose that, for each j and k, E|Z|^q ≤ q! M^{q-2} s/2 for every q ≥ 2 and some constants M and s, where Z = Q_j Q_k - E(Q_j Q_k), and Q_j, Q_k denote elements of Q.
Following arguments in [10], it can be shown that under Assumption 1 and given a sequence of sets of estimators B_n = {β : ‖β‖_1 ≤ L_n} for L_n = o((n / log n)^{1/4}), the sequence of uncompressed lasso estimators β̂_n = arg min_{β ∈ B_n} R̂_n(β) is persistent, i.e., R(β̂_n) - R(β_{*,n}) → 0 in probability.
Compressed Persistence. For the compressed case, again we want to predict (X, Y), but now the estimator β̂_{n,m} is based on the lasso from the compressed data of size m ≪ n. Let γ = (-1, β_1, ..., β_p)^T as before, and we replace R̂_n with
R̂_{n,m}(β) = γ^T Σ̂_{n,m} γ, where Σ̂_{n,m} = (1/(mn)) Q^T Φ^T Φ Q.   (7)
Given compressed sample size m_n, let B_{n,m} = {β : ‖β‖_1 ≤ L_{n,m}}, where L_{n,m} = o((m_n / log(np_n))^{1/4}). We define the compressed oracle predictor β_{*,n,m} = arg min_{‖β‖_1 ≤ L_{n,m}} R(β) and the compressed lasso estimator β̂_{n,m} = arg min_{‖β‖_1 ≤ L_{n,m}} R̂_{n,m}(β).
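The quadratic forms (5)-(7) translate directly into code. The sketch below is ours; we read the 1/(mn) normalisation of (7) as corresponding to Φ entries of unit variance, so Phi here has N(0, 1) entries, which is an assumption on our part.

    import numpy as np

    def gamma_vec(beta):
        return np.concatenate(([-1.0], beta))

    def empirical_risk(Q, beta):
        # R_hat_n(beta) = gamma' (Q'Q / n) gamma, eq. (6);
        # Q = np.column_stack((Y, X)) is the n x (p+1) data matrix.
        n = Q.shape[0]
        g = gamma_vec(beta)
        return g @ (Q.T @ Q / n) @ g

    def compressed_empirical_risk(Q, Phi, beta):
        # R_hat_{n,m}(beta) with Sigma_hat_{n,m} = Q' Phi' Phi Q / (m n), eq. (7);
        # Phi = rng.normal(size=(m, n)) under the N(0, 1) reading above.
        m, n = Phi.shape
        g = gamma_vec(beta)
        PQ = Phi @ Q
        return g @ (PQ.T @ PQ / (m * n)) @ g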
Theorem 4.1. Under Assumption 1, we further assume that there exists a constant M_1 > 0 such that E(Q_j^2) < M_1 for all j, where Q_j denotes the jth element of Q. For any sequence B_{n,m} ⊆ R^p with log^2(np_n) ≤ m_n ≤ n, where B_{n,m} consists of all coefficient vectors β such that ‖β‖_1 ≤ L_{n,m} = o((m_n / log(np_n))^{1/4}), the sequence of compressed lasso procedures β̂_{n,m} = arg min_{β ∈ B_{n,m}} R̂_{n,m}(β) is persistent: R(β̂_{n,m}) - R(β_{*,n,m}) → 0 in probability, when p_n = O(e^{n^c}) for c < 1/2.
The main difference between the sequence of compressed lasso estimators and the original uncompressed sequence is that n and m_n together define the sequence of estimators for the compressed data. Here m_n is allowed to grow from Ω(log^2(np)) to n; hence for each fixed n, {β̂_{n,m} : log^2(np) < m_n ≤ n} defines a subsequence of estimators. In Section 6 we illustrate compressed lasso persistency via simulations, comparing the empirical risks with the oracle risks on such a subsequence for a fixed n.
5 Information Theoretic Analysis of Privacy
Next we derive bounds on the rate at which the compressed data X̃ reveal information about the uncompressed data X. Our general approach is to consider the mapping X ↦ ΦX + Δ as a noisy communication channel, where the channel is characterized by multiplicative noise Φ and additive noise Δ. Since the number of symbols in X is np, we normalize by this effective block length to define the information rate r_{n,m} per symbol as r_{n,m} = sup_{p(X)} I(X; X̃)/(np). Thus, we seek bounds on the capacity of this channel. A privacy guarantee is given in terms of bounds on the rate r_{n,m} → 0 decaying to zero. Intuitively, if the mutual information satisfies I(X; X̃) = H(X) - H(X | X̃) ≈ 0, then the compressed data X̃ reveal, on average, no more information about the original data X than could be obtained from an independent sample.
The underlying channel is equivalent to the multiple antenna model for wireless communication [13], where there are n transmitter and m receiver antennas in a Rayleigh flat-fading environment. The propagation coefficients between pairs of transmitter and receiver antennas are modeled by the matrix entries Φ_{ij}; they remain constant for a coherence interval of p time periods. Computing the channel capacity over multiple intervals requires optimization of the joint density of pn transmitted signals, the problem studied in [13]. Formally, the channel is modeled as Z = ΦX + γΔ, where γ > 0, Δ_{ij} ~ N(0, 1), Φ_{ij} ~ N(0, 1/n) and (1/n) Σ_{i=1}^n E[X_{ij}^2] ≤ P, where the latter is a power constraint.
Theorem 5.1. Suppose that E[X_{ij}^2] ≤ P and the compressed data are formed by Z = ΦX + γΔ, where Φ is m × n with independent entries Φ_{ij} ~ N(0, 1/n) and Δ is m × p with independent entries Δ_{ij} ~ N(0, 1). Then the information rate r_{n,m} satisfies
r_{n,m} = sup_{p(X)} I(X; Z)/(np) ≤ (m/n) log(1 + P/γ^2).
This result is implicitly contained in [13]. When Δ = 0, or equivalently γ = 0, which is the case assumed in our sparsistence and persistence results, the above analysis yields the trivial bound r_{n,m} ≤ ∞. We thus derive a separate bound for this case; however, the resulting asymptotic order of the information rate is the same.
Theorem 5.2. Suppose that E[X_{ij}^2] ≤ P and the compressed data are formed by Z = ΦX, where Φ is m × n with independent entries Φ_{ij} ~ N(0, 1/n). Then the information rate r_{n,m} satisfies
r_{n,m} = sup_{p(X)} I(X; Z)/(np) ≤ (m/2n) log(2πeP).
Under our sparsistency lower bound on m, the above upper bounds are r_{n,m} = O(log(np)/n). We note that these bounds may not be the best possible since they are obtained assuming knowledge of the compression matrix Φ, when in fact the privacy protocol requires that Φ and Δ are not public.
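The two upper bounds are easy to evaluate numerically; the small sketch below is ours, with the example parameters chosen arbitrarily.

    import numpy as np

    def rate_bound_noisy(m, n, P, gamma):
        # Theorem 5.1: r_{n,m} <= (m/n) * log(1 + P / gamma^2), in nats per symbol.
        return (m / n) * np.log(1.0 + P / gamma**2)

    def rate_bound_noiseless(m, n, P):
        # Theorem 5.2 (Delta = 0): r_{n,m} <= (m / 2n) * log(2*pi*e*P).
        return (m / (2.0 * n)) * np.log(2.0 * np.pi * np.e * P)

    # e.g. m = 200 compressed rows of n = 2000 originals, unit power:
    print(rate_bound_noisy(200, 2000, P=1.0, gamma=1.0))   # ~0.069
    print(rate_bound_noiseless(200, 2000, P=1.0))          # ~0.142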
6 Experiments
In this section, we report results of simulations designed to validate the theoretical analysis presented
in previous sections. We first present results that show the compressed lasso is comparable to the
uncompressed lasso in recovering the sparsity pattern of the true linear model. We then show results
on persistence that are in close agreement with the theoretical results of Section 4. We only include
Figures 1-2 here; additional plots are included in the full version.
Sparsistency. Here we run simulations to compare the compressed lasso with the uncompressed lasso in terms of the probability of success in recovering the sparsity pattern of β*. We use random matrices for both X and Φ, and reproduce the experimental conditions of [19]. A design parameter is the compression factor f = n/m, which indicates how much the original data are compressed. The results show that when the compression factor f is large enough, the thresholding behaviors specified in (8) and (9) for the uncompressed lasso carry over to the compressed lasso, when X is drawn from a Gaussian ensemble. In general, the compression factor f is well below the requirement that we have in Theorem 3.3 in case X is deterministic. In more detail, we consider the Gaussian ensemble for the projection matrix Φ, where Φ_{ij} ~ N(0, 1/n) are independent. The noise is ε ~ N(0, σ^2), where σ^2 = 1. We consider Gaussian ensembles for the design matrix X with both diagonal and Toeplitz covariance. In the Toeplitz case, the covariance is given by T(ρ)_{ij} = ρ^{|i-j|}; we use ρ = 0.1. [19] shows that when X comes from a Gaussian ensemble under these conditions, there exist fixed constants θ_ℓ and θ_u such that for any δ > 0 and s = |supp(β)|, if
n > 2(θ_u + δ) s log(p - s) + s + 1,   (8)
then the lasso identifies the true variables with probability approaching one. Conversely, if
n < 2(θ_ℓ - δ) s log(p - s) + s + 1,   (9)
then the probability of recovering the true variables using the lasso approaches zero. In the following simulations, we carry out the lasso using the procedure lars(Y, X), which implements the LARS algorithm of [8] to calculate the full regularization path. For the uncompressed case, we run lars(Y, X) with Y = Xβ* + ε, and for the compressed case we run lars(ΦY, ΦX) with ΦY = ΦXβ* + Φε. The regularization parameter is λ_m = c √((log(p - s) log s)/m). The results show that the behavior under compression is close to the uncompressed case.
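In outline, each cell of Figure 1 can be produced by a loop of the following form. This is our sketch only: it reuses the compressed_lasso ISTA solver sketched after (1) in place of lars, takes c = 1 in λ_m, and the thresholding rule for reading off signs is ours.

    import numpy as np

    def success_probability(p, s, theta, f, trials=200, rng=np.random.default_rng(0)):
        m = int(2 * theta * s * np.log(p - s) + s + 1)
        n = f * m                                     # compression factor f = n/m
        lam = np.sqrt(np.log(p - s) * np.log(s) / m)  # lambda_m with c = 1
        hits = 0
        for _ in range(trials):
            X = rng.normal(size=(n, p))
            beta = np.zeros(p); beta[:s] = rng.choice([-1.0, 1.0], size=s)
            Y = X @ beta + rng.normal(size=n)
            Phi = rng.normal(0.0, 1.0 / np.sqrt(n), size=(m, n))
            b = compressed_lasso(Phi @ X, Phi @ Y, lam)
            sgn = np.where(np.abs(b) > 1e-6, np.sign(b), 0.0)
            hits += np.array_equal(sgn, np.sign(beta))
        return hits / trials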
Persistence. Here we solve the following ℓ1-constrained optimization problem directly, based on algorithms described by [15]: β̃ = arg min_{‖β‖_1 ≤ L} ‖Y - Xβ‖_2. We constrain the solution to lie in the ball B_n = {‖β‖_1 ≤ L_n}, where L_n = n^{1/4}/√(log n). By [10], the uncompressed lasso
[Figure 1 plots: probability of success versus the control parameter θ (0.0-3.0) or the compressed dimension m (0-300). Panels: "Toeplitz ρ=0.1; Fractional Power γ=0.5, δ=0.2" with curves for p = 128, 256, 512, 1024; "Identity; FP γ=0.5, δ=0.2; p=1024" and "Toeplitz ρ=0.1; FP γ=0.5, δ=0.2; p=1024" with curves for the uncompressed lasso and compression factors f = 5, 10, 20, 40, 80, 120.]
Figure 1: Plots of the number of samples versus the probability of success for recovering sgn(β*). Each point on a curve for a particular θ or m, where m = 2θσ^2 s log(p - s) + s + 1, is an average over 200 trials; for each trial, we randomly draw X_{n×p}, Φ_{m×n}, and ε ∈ R^n. The covariance Σ = (1/n) E[X^T X] and model β* are fixed across all curves in the plot. The sparsity level is s(p) = 0.2 p^{1/2}. The four sets of curves in the left plot are for p = 128, 256, 512 and 1024, with dashed lines marking m for θ = 1 and s = 2, 3, 5 and 6 respectively. In the plots on the right, each curve has a compression factor f ∈ {5, 10, 20, 40, 80, 120} for the compressed lasso, thus n = fm; dashed lines mark θ = 1. For Σ = I, θ_u = θ_ℓ = 1, while for Σ = T(0.1), θ_u ≈ 1.84 and θ_ℓ ≈ 0.46 [19], for the uncompressed lasso in (8) and (9).
[Figure 2 plot, titled "n=9000, p=128, s=9": risk (8-18) versus compressed dimension m (0-8000), with curves for the uncompressed predictive risk, compressed predictive risk, and compressed empirical risk.]
Figure 2: Risk versus compressed dimension. We fix n = 9000 and p = 128, with s = 9 and L_n = 2.6874. The model is β* = (-0.9, -1.7, 1.1, 1.3, -0.5, 2, -1.7, -1.3, -0.9, 0, ..., 0)^T, so that ‖β*‖_1 > L_n and β* ∉ B_n, and the uncompressed oracle predictive risk is R = 9.81. For each value of m, a data point corresponds to the mean empirical risk, which is defined in (7), over 100 trials, and each vertical bar shows one standard deviation. For each trial, we randomly draw X_{n×p} with i.i.d. row vectors x_i ~ N(0, T(0.1)), and Y = Xβ* + ε.
estimator β̂_n is persistent over B_n. For the compressed lasso, given n and p_n, and a varying compressed sample size m, we take the ball B_{n,m} = {β : ‖β‖_1 ≤ L_{n,m}} where L_{n,m} = m^{1/4}/√(log(np_n)). The compressed lasso estimator β̂_{n,m}, for log^2(np_n) ≤ m ≤ n, is persistent over B_{n,m} by Theorem 4.1. The simulations confirm this behavior.
7 Acknowledgments
This work was supported in part by NSF grant CCF-0625879.
References
[1] D. Agrawal and C. C. Aggarwal. On the design and quantification of privacy preserving data mining algorithms. In Proceedings of the 20th Symposium on Principles of Database Systems, May 2001.
[2] E. Candès, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications in Pure and Applied Mathematics, 59(8):1207-1223, August 2006.
[3] T. Dalenius. Towards a methodology for statistical disclosure control. Statistik Tidskrift, 15:429-444, 1977.
[4] D. Donoho. Compressed sensing. IEEE Trans. Info. Theory, 52(4):1289-1306, April 2006.
[5] M. Duarte, M. Davenport, M. Wakin, and R. Baraniuk. Sparse signal detection from incoherent projections. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, 2006.
[6] G. Duncan and R. Pearson. Enhancing access to microdata while protecting confidentiality: Prospects for the future. Statistical Science, 6(3):219-232, August 1991.
[7] C. Dwork. Differential privacy. In 33rd International Colloquium on Automata, Languages and Programming (ICALP 2006), pages 1-12, 2006.
[8] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407-499, 2004.
[9] J. Feigenbaum, Y. Ishai, T. Malkin, K. Nissim, M. J. Strauss, and R. N. Wright. Secure multiparty computation of approximations. ACM Trans. Algorithms, 2(3):435-472, 2006.
[10] E. Greenshtein and Y. Ritov. Persistency in high dimensional linear predictor-selection and the virtue of over-parametrization. Journal of Bernoulli, 10:971-988, 2004.
[11] J. Haupt, R. Castro, R. Nowak, G. Fudge, and A. Yeh. Compressive sampling for signal classification. In Proc. Asilomar Conference on Signals, Systems, and Computers, October 2006.
[12] K. Liu, H. Kargupta, and J. Ryan. Random projection-based multiplicative data perturbation for privacy preserving distributed data mining. IEEE Trans. on Knowl. and Data Engin., 18(1), Jan. 2006.
[13] T. L. Marzetta and B. M. Hochwald. Capacity of a mobile multiple-antenna communication link in Rayleigh flat fading. IEEE Trans. Info. Theory, 45(1):139-157, January 1999.
[14] N. Meinshausen and B. Yu. Lasso-type recovery of sparse representations for high-dimensional data. Technical Report 720, Department of Statistics, UC Berkeley, 2006.
[15] M. Osborne, B. Presnell, and B. Turlach. On the lasso and its dual. J. Comp. and Graph. Stat., 9(2):319-337, 2000.
[16] A. P. Sanil, A. Karr, X. Lin, and J. P. Reiter. Privacy preserving regression modelling via distributed computation. In Proceedings of Tenth ACM SIGKDD, 2004.
[17] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B, 58(1):267-288, 1996.
[18] J. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory, 50(10):2231-2242, 2004.
[19] M. Wainwright. Sharp thresholds for high-dimensional and noisy recovery of sparsity. Technical Report 709, Department of Statistics, UC Berkeley, May 2006.
[20] P. Zhao and B. Yu. On model selection consistency of lasso. J. Mach. Learn. Research, 7:2541-2567, 2007.
2,515 | 3,281 | The Infinite Markov Model
Daichi Mochihashi*
NTT Communication Science Laboratories
Hikaridai 2-4, Keihanna Science City
Kyoto, Japan 619-0237
daichi@cslab.kecl.ntt.co.jp
Eiichiro Sumita
ATR / NICT
Hikaridai 2-2, Keihanna Science City
Kyoto, Japan 619-0288
eiichiro.sumita@atr.jp
Abstract
We present a nonparametric Bayesian method of estimating variable order Markov
processes up to a theoretically infinite order. By extending a stick-breaking prior,
which is usually defined on a unit interval, "vertically" to the trees of infinite depth
associated with a hierarchical Chinese restaurant process, our model directly infers
the hidden orders of Markov dependencies from which each symbol originated.
Experiments on character and word sequences in natural language showed that
the model has performance comparable to that of an exponentially large full-order model, while being computationally much more efficient in both time and space. We expect
that this basic model will also extend to the variable order hierarchical clustering
of general data.
1 Introduction
Since the pioneering work of Shannon [1], Markov models have not only been taught in elementary
information theory classes, but also served as indispensable tools and building blocks for sequence
modeling in many fields, including natural language processing, bioinformatics [2], and compression [3]. In particular, (n-1)th order Markov models over words are called "n-gram" language
models and play a key role in speech recognition and machine translation, as regards choosing the
most natural sentence among candidate transcriptions [4].
Despite its mathematical simplicity, an inherent problem with a Markov model is that we must
determine its order. Because higher-order Markov models have an exponentially large number of
parameters, their orders have been restricted to a small, often fixed number. In fact, for "n-gram" models the assumed word dependency n is usually set at three to five due to the high dimensionality of the lexicon. However, word dependencies will often have a span greater than n for
phrasal expressions or compound proper nouns, or a much shorter n will suffice for some grammatical relationships. Similarly, DNA or amino acid sequences might have originated from multiple
temporal scales that are unknown to us.
To alleviate this problem, many "variable-order" Markov models have been proposed [2, 5, 6, 7]. However, all stemming from [5] and [7], they are based on pruning a huge candidate suffix tree by employing such criteria as KL-divergences. This kind of "post-hoc" approach suffers from several important limitations. First, when we want to consider deeper dependencies, the candidate tree to be pruned will be extremely large. This is especially prohibitive when the lexicon size is large, as with language. Second, the criteria and threshold for pruning the tree are inherently exogenous and must be set carefully so that they match the desired model and current data. Third, pruning by empirical counts in advance, which is often used to build "arbitrary order" candidate trees in these approaches, is shown to behave very badly [8] and has no theoretical standing.
In contrast, in this paper we propose a complete generative model of variable-order Markov processes up to a theoretically infinite order. By extending a stick-breaking prior, which is usually
* This research was conducted while the first author was affiliated with ATR/NICT.
[Figure 1 diagrams: (a) a depth-2 suffix tree with context nodes such as "will", "she will", "he will", "of", "and", and customers (counts) for words like sing, like, cry; (b) an infinite suffix tree in which customers and proxy customers sit at varying depths, e.g. "of" → "states of" → "united states of" → "the united states of" for the word america.]
(a) Suffix tree representation of the hierarchical Chinese restaurant process on a second-order Markov model. Each count is a customer in this suffix tree. (b) Infinite suffix tree of the proposed model. Deploying customers at suitable depths, i.e. Markov orders, is our inference problem.
Figure 1: Hierarchical Chinese restaurant processes over the finite and infinite suffix trees.
Figure 1: Hierarchical Chinese restaurant processes over the finite and infinite suffix trees.
defined on a unit interval, ?vertically? to the trees of infinite depth associated with a hierarchical
Chinese restaurant process, our model directly infers the hidden orders of Markov dependencies
from which each symbol originated. We show this is possible with a small change to the inference
of the hiearchical Pitman-Yor process in discrete cases, and actually makes it more efficient in both
computational time and space. Furthermore, we extend the variable model by latent topics to show
that we can induce the variable length ?stochastic phrases? for topic by topic.
2 Suffix Trees on Hierarchical Chinese Restaurant Processes
The main obstacle that has prevented consistent approaches to variable order Markov models is
the lack of a hierarchical generative model of Markov processes that allows estimating increasingly sparse distributions as the order gets larger. However, now that we have the hierarchical (Poisson-)Dirichlet process, which can be used as a fixed order language model [9][10], it is natural for us to
extend these models to variable orders also by using a nonparametric approach. While we concentrate here on discrete distributions, the same basic approach can be applied to a Markov process on
continuous distributions, such as Gaussians that inherit their means from their parent distributions.
For concreteness below we use a language model example, but the same model can be applied to
any discrete sequences, such as characters, DNAs, or even binary streams for compression.
Consider a trigram language model, which is a second-order Markov model over words often employed in speech recognition. Following [9], this Markov model can be represented by a suffix tree
of depth two, as shown in Figure 1(a). When we predict a word "sing" after a context "she will", we descend this suffix tree from the root (which corresponds to the null string context), using the context backwards to follow a branch "will" and then "she will".¹ Now we arrive at the leaf node that represents the context, and we can predict "sing" by using the count distribution at this node.
During the learning phase, we begin with a suffix tree that has no counts. For every time a three word
sequence appears in the training data, such as "she will sing" mentioned above, we add a count of the final word ("sing") given the context ("she will") to the context node in the suffix tree. In fact this
corresponds to a hierarchical Chinese restaurant process, where each context node is a restaurant
and each count is a customer associated with a word. Here each node, i.e. restaurant, might not
have customers for all the words in the lexicon. Therefore, when a customer arrives at a node and
stochastically needs a new table to sit down, a copy of him, namely a proxy customer, is sent to its
parent node. When a node has no customer to compute the probability of some word, it uses the
distribution of customers at the parent node and appropriately interpolates it to sum to 1.
Assume that the node "she will" does not have a customer of "like." We can nevertheless compute the probability of "like" given "she will" if its sibling "he will" has a customer "like". Because that sibling has sent a copy of the customer to the common parent "will", the probability is computed by appropriately interpolating the trigram probability given "she will", which is zero, with the bigram probability given "will", which is not zero at the parent node.
¹ This is the leftmost path in Figure 1(a). When there is no corresponding branch, we will create it.
[Figure 2 diagram: a branch of the infinite suffix tree through nodes i, j, k, passed with probabilities (1 - q_i), (1 - q_j), (1 - q_k).]
Figure 2: Probabilistic suffix tree of infinite depth. (1 - q_i) is the "penetration probability" of a descending customer at each node i, defining a stick-breaking process over the infinite tree.
Consequently, in the hierarchical Pitman-Yor language model (HPYLM), the predictive probability of a symbol s = s_t in context h = s_{t-n} ⋯ s_{t-1} is recursively computed by
p(s|h) = (c(s|h) - d·t_{hs}) / (θ + c(h)) + ((θ + d·t_{h·}) / (θ + c(h))) · p(s|h′),   (1)
where h′ = s_{t-n+1} ⋯ s_{t-1} is the shortened context with the farthest symbol dropped. c(s|h) is the count of s at node h, and c(h) = Σ_s c(s|h) is the total count at node h. t_{hs} is the number of times symbol s is estimated to be generated from its parent distribution p(s|h′) rather than p(s|h) in the training data; t_{h·} = Σ_s t_{hs} is its total. θ and d are the parameters of the Pitman-Yor process, and can be estimated through the distribution of customers on a suffix tree by Gamma and Beta posterior distributions, respectively. For details, see [9].
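Equation (1) amounts to a simple recursion up the suffix tree. The Python sketch below is ours: for brevity it uses a single scalar θ and d shared across depths (in practice they are depth-dependent) and a uniform distribution `base` above the root.

    class Node:
        def __init__(self, parent=None):
            self.parent = parent
            self.counts = {}    # c(s|h) per symbol
            self.tables = {}    # t_{hs} per symbol
            self.total = 0      # c(h)
            self.ntables = 0    # t_{h.}

    def hpylm_prob(node, s, theta, d, base):
        # Recursive predictive probability of eq. (1); the parent node
        # plays the role of the shortened context h'.
        if node is None:
            return base
        parent_p = hpylm_prob(node.parent, s, theta, d, base)
        c_s = node.counts.get(s, 0)
        t_s = node.tables.get(s, 0)
        return (max(c_s - d * t_s, 0.0)
                + (theta + d * node.ntables) * parent_p) / (theta + node.total)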
Although this Bayesian Markov model is very principled and attractive, we can see from Figure 1(a)
that all the real customers (i.e., counts) are fixed at the depth (n-1) in the suffix tree. Because actual
sequences will have heterogeneous Markov dependencies, we want a Markov model that deploys
customers at different levels in the suffix tree according to the true Markov order from which each
customer originated. But how can we model such a heterogeneous property of Markov sequences?
3 Infinite-order Hierarchical Chinese Restaurant Processes
Intuitively, we know that suffix trees that are too deep are improbable and symbol dependencies
decay largely exponentially with context lengths. However, some customers may reside in a very
deep node (for example, "the united states of america") and some in a shallow node ("shorter than").
Our model for deploying customers must be flexible enough to accommodate all these possibilities.
3.1 Introducing Suffix Tree Prior
For this purpose, we assume that each node i in the suffix tree has a hidden probability qi of stopping
at node i when following a path from the root of the tree to add a customer. In other words, (1 - q_i) is the "penetration probability" when descending an infinite depth suffix tree from its root (Figure 2). We assume that each q_i is generated from a prior Beta distribution independently as:
q_i ~ Be(α, β)  i.i.d.   (2)
This choice is mainly for simplicity; however, later we will show that the final predictive performance does not significantly depend on α or β.
When we want to generate a symbol s_t given a context h = s_{-∞} ⋯ s_{t-2} s_{t-1}, we descend the suffix tree from the root following a path s_{t-1} → s_{t-2} → ⋯, according to the probability of stopping at a level l given by
p(n = l | h) = q_l ∏_{i=0}^{l-1} (1 - q_i).   (l = 0, 1, ⋯, ∞)   (3)
When we stop at level l, we generate a symbol s_t using the context s_{t-l} ⋯ s_{t-2} s_{t-1}. Since q_i differs from node to node, we may reach very deep nodes with high probability if the q_i's along the path are equally small (the "penetration" of this branch is high); or, we may stop at a very shallow node if the q_i's are very high (the "penetration" is low). In general, the probability of reaching a node decays exponentially with levels according to (3), but the degrees differ, to allow for long sequences of typical phrases.
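A sketch of the generative descent, written by us: each node's q_i is drawn once from Be(α, β) as in (2), and a depth is then sampled along the context path as in (3); the hyperparameters below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, beta = 1.0, 1.0
    qs = rng.beta(alpha, beta, size=10)   # q_i for the nodes along one path

    def sample_depth(qs, rng):
        # Stop at level l with probability q_l * prod_{i<l} (1 - q_i), eq. (3).
        for l, q in enumerate(qs):
            if rng.random() < q:
                return l
        return len(qs)   # passed every instantiated node: extend the branch

    print([sample_depth(qs, rng) for _ in range(5)])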
Note that even for the same context h, the context length that was used to generate the next symbol
may differ stochastically for each appearance according to (3).
3.2 Inference
Of course, we do not know the hidden probability q_i possessed by each node. How, then, can we estimate it? Note that the generative model above amounts to introducing a vector of hidden variables, n = n_1 n_2 ⋯ n_T, that corresponds to the Markov order (n = 0 ⋯ ∞) from which each symbol s_t in s = s_1 s_2 ⋯ s_T originated. Therefore, we can write the probability of s as follows:
p(s) = Σ_n Σ_z p(s, z, n).   (4)
Here, z = z_1 z_2 ⋯ z_T is a vector that represents the hidden seatings of the proxy customers described in Section 2, where 0 ≤ z_t ≤ n_t means how recursively s_t's proxy customers are stochastically sent to parent nodes. To estimate these hidden variables n and z, we use a Gibbs sampler as in [9]. Since in the hierarchical (Poisson-)Dirichlet process the customers are exchangeable [9] and the q_i are i.i.d. as shown in (2), this process is also exchangeable and therefore we can always assume, by a suitable permutation, that the customer to resample is the final customer.
In our case, we only explicitly resample nt given n?t (n excluding nt ), as follows:
nt ? p(nt |s, z?t , n?t ).
(5)
Notice here that when we sample nt , we already know the other depths n?t that other words have
reached in the suffix tree. Therefore, when computing (5) using (3), the expectation of each qi is
ai +?
E[qi ] =
,
(6)
ai +bi +?+?
where ai is the number of times node i was stopped at when generating other words, and bi is
the number of times node i was passed by. Using this estimate, we decompose the conditional
probability of (5) as
p(nt |s, z?t , n?t ) ? p(st |s?t , z?t , n) p(nt |s?t , z?t , n?t ) .
(7)
The first term is the probability of st under HPYLM when the Markov order is known to be nt ,
given by (1). The second term is the prior probability of reaching that node at depth nt . By using
(6) and (3), this probability is given by
l?1
Y
al +?
bi +?
p(nt = l|s?t , z?t , n?t ) =
.
(8)
al +bl +?+? i=0 ai +bi +?+?
Expression (7) is a tradeoff between these two terms: the prediction of st will be increasingly better
when the context length nt becomes long, but we can select it only when the probability of reaching
that level in the suffix tree is supported by the other counts in the training data.
Using these probabilities, we can construct a Gibbs sampler, as shown in Figure 3, to iteratively
resample n and z in order to estimate the parameter of the variable order hierarchical Pitman-Yor
language model (VPYLM)2 . In this sampler, we first remove the t?th customer who resides at a depth
of order[t] in the suffix tree, and decrement ai or bi accordingly along the path. Sampling a new
depth (i.e. Markov order) according to (7), we put the t?th customer back at the new depth recorded
as order[t], and increment ai or bi accordingly along the new path. When we add a customer st , zt
is implicitly sampled because st ?s proxy customer is recursively sent to parent nodes in case a new
table is needed to sit him down.
struct ngram {
/* n-gram node */
ngram *parent;
1: for j = 1 ? ? ? N do
splay *children; /* = (ngram **) */
2:
for t = randperm(1 ? ? ? T ) do
splay *symbols; /* = (restaurant **) */
3:
if j > 1 then
int stop;
/* ah */
int through;
/ * bh * /
4:
remove customer (order[t], st , s1:t?1 )
int ncounts;
/* c(h) */
5:
end if
int ntables;
/* th? */
6:
order[t] = add customer (st , s1:t?1 ) .
int id;
/* symbol id */
};
7:
end for
8: end for
Figure 4: Data structure of a suffix tree node.
Figure 3: Gibbs Sampler of VPYLM.
2
Counts ah and bh are maintained at each node. We
used Splay Trees for efficient insertion/deletion.
This is a specific application of our model to the hierarchical Pitman-Yor processes for discrete data.
?how queershaped little children drawling-desks, which would get through that dormouse!?
said alice; ?let us all for anything the secondly, but it to have and another question, but i
shalled out, ?you are old,? said the you?re trying to far out to sea.
(a) Random walk generation from a character model.
Character
s a i d a l i c e ; ? l e t u s a l l f o r any t h i ng t he s e cond l y , ? ? ?
Markov order 5 6 5 4 7 10 6 5 4 3 7 1 4 8 2 4 4 6 5 5 4 4 5 5 6 4 5 6 7 7 7 5 3 3 4 5 9 11 6 4 8 9 8 9 4 4 4 7 3 4 3 ? ? ?
(b) Markov orders used to generate each character above.
Figure 5: Character-based infinite Markov model trained on ?Alice in Wonderland.?
This sampler is an extension of that reported in [9] using stochastically different orders n (n =
0 ? ? ? ?) for each customer. In practice, we can place some maximum order nmax on n and sample
within it 3 , or use a small threshold ? to stop the descent when the prior probability (8) of reaching
that level is smaller than ?. In this case, we obtain an ?infinite? order Markov model: now we can
eliminate the order from Markov models by integrating it out.
Because each node in the suffix tree may have a huge number of children, we used Splay Trees [11]
for the efficient search as in [6]. Splay Trees are self-organizing binary search trees having amortized
O(log n) order, that automatically put frequent items at shallower nodes. This is ideal for sequences
with a power law property like natural languages. Figure 4 shows our data structure of a node in a
suffix tree.
3.3
Prediction
Since we do not usually know the Markov order of a context h = s?? ? ? ? s?2 s?1 beforehand, when
making predictions we consider n as a latent variable and average over it, as follows:
P?
p(s|h) = n=0 p(s, n|h)
(9)
P?
= n=0 p(s|h, n)p(n|h) .
(10)
Here, p(s|n, h) is a HPYLM prediction of order n through (1), and p(n|h) is the probability distribution of latent Markov order n possessed by the context h, obtained through (8). In practice, we
further average (10) over the configurations of n and s through N Gibbs iterations on training data
s, as HPYLM does.
Since p(n|h) has a product form as (3), we can also write the above expression recursively by
introducing an auxiliary probability p(s|h, n+ ) as follows:
p(s|h, n+ ) = qn ? p(s|h, n) + (1 ? qn ) ? p(s|h, (n+1)+ ) ,
(11)
+
p(s|h) ? p(s|h, 0 ) .
(12)
This formula shows that qn in fact defines the stick-breaking process on an infinite tree, where
breaking proportions will differ branch to branch as opposed to a single proportion on a unit interval
used in ordinary Dirichlet processes. In practice, we can truncate the infinite recursion in (11) and
rescale it to make p(n|h) a proper distribution.
3.4
?Stochastic Phrases? on Suffix Tree
In the expression (9) above, p(s, n|h) is the probability that the symbol s is generated by a Markov
process of order n on h, that is, using the last n symbols of h as a Markov state. This means that a
subsequence s?n ? ? ? s?1 s forms a ?phrase?: for example, when ?Gaussians? was generated using a
context ?mixture of?, we can consider ?mixture of Gaussians? as a phrase and assign a probability
to this subsequence, which represents its cohesion strength irrespective of its length.
In other words, instead of emitting a single symbol s at the root node of suffix tree, we can first
stochastically descend the tree according to the probability to stop by (3). Finally, we emit s given
the context s?n ? ? ? s?1 , which yields a phrase s?n ? ? ? s?1 s and its cohesion probability. Therefore,
by traversing the suffix tree, we can compute p(s, n|h) for all the subsequences efficiently. For
concrete examples, see Figure 8 and 10 in Section 4.
3
Notice that by setting (?, ?) = (0, ?), we always obtain qi = 0: with some maximum order nmax , this
is equivalent to always using the maximum depth, and thus to reducing the model to the original HPYLM. In
this regard, VPYLM is a natural superset that includes HPYLM [9].
while
key
european
consuming
nations
appear
unfazed
about
the
prospects
of
a
producer
cartel
that
will
attempt
to
fix
prices
|
the
pact
is
likely
to
meet
strong
opposition
from
u.s.
delegates
this
week
EOS
n
0 1 2 3 4 5 6 7 8 9
Figure 6: Estimated Markov order distributions from which each word has been generated.
4
Experiments
To investigate the behavior of the infinite Markov model, we conducted experiments on character
and word sequences in natural language.
4.1
Infinite character Markov model
Character-based Markov model is widely employed in data compression and has important application in language processing, such as OCR and unknown word recognition. In this experiment, we
used a 140,931 characters text of ?Alice in Wonderland? and built an infinite Markov model using
uniform Beta prior and truncation threshold ? = 0.0001 in Section 3.2.
Max. order Perplexity
Figure 5(a) is a random walk generation from this infinite model.
n=3
6.048
To generate this, we begin with an infinite sequence of ?beginning
n=5
3.803
of sentence? special symbols, and sample the next character accordn = 10
3.519
ing to the generative model given the already sampled sequence as
n=?
3.502
the context. Figure 5(b) is the actual Markov orders used for generation by (8). Without any notion of ?word?, we can see that our Table 1: Perplexity results of
model correctly captures it and even higher dependencies between Character models.
?words?. In fact, the model contained many nodes that correspond to valid words as well as the
connective fragments between them. Table 1 shows predictive perplexity4 results on separate test
data. Compared with truncations n = 3, 5 and 10, the infinite model performs the best in all the
variable order options.
Bayesian ?-gram Language Model
4.2
Data For a word-based ?n-gram? model of language, we used a random subset of the standard
NAB Wall Street Journal language modeling corpus [12] 5 , totalling 10,007,108 words (409,246
sentences) for training and 10,000 sentences for testing. Symbols that occurred fewer than 10 times
in total and punctuation (commas, quotation marks etc.) are mapped to special characters, and all
sentences are lowercased, yielding a lexicon of 26,497 words. As HPYLM is shown to converge
very fast [9], according to preliminary experiments we used N = 200 Gibbs iterations for burn-in,
and a further 50 iterations to evaluate the perplexity of the test data.
Results Figure 6 shows the Hinton diagram of estimated Markov order distributions on part of the
training data, computed according to (7). As for the perplexity, Table 2 shows the results compared
with the fixed-order HPYLM with the number of nodes in each model. n means the fixed order for
HPYLM, and the maximum order nmax in VPYLM. For the ?infinite? model of n = ?, we used a
threshold ? = 10?8 in Section 3.2 for descending the suffix tree.
As empirically found by [12], perplexities will saturate when n becomes large, because only a small
portion of words actually exhibit long-range dependencies. However, we can see that the VPYLM
performance is comparable to that of HPYLM with much fewer nodes and restaurants up to n = 7
and 8, where vanilla HPYLM encounters memory overflow caused by a rapid increase in the number
of parameters. In fact, the inference of VPYLM is about 20% faster than that of HPYLM of the
4
Perplexity is a reciprocal of average predictive probabilities, thus smaller is better.
We also conducted experiments on standard corpora of Chinese (character-wise) and Japanese, and obtained the same line of results presented in this paper.
5
3.5?106
HPYLM VPYLM Nodes(H) Nodes(V)
113.60
113.74
1,417K
1,344K
101.08
101.69
12,699K
7,466K
N/A
100.68
27,193K 10,182K
N/A
100.58
34,459K 10,434K
?
100.36
?
10,629K
Table 2: Perplexity Results of VPYLM and
HPYLM on the NAB corpus with the number
of nodes in each model. N/A means a memory
overflow caused by the expected number of nodes
shown in italic.
3.0?106
Occurrences
n
3
5
7
8
?
2.5?106
2.0?106
1.5?106
1.0?106
5.0?105
0.0?100
0 1 2 3 4 5 6 7 8 9 10 11 12
n
Figure 7: Global distribution of sampled
Markov orders on the ?-gram VPYLM over
the NAB corpus. n = 0 is unigram, n = 1 is
bigram,? ? ? .
same order despite the additional cost of sampling n-gram orders, because it appropriately avoids
the addition of unnecessarily deep nodes on the suffix tree. The perplexity at n = ? is the lowest
compared to all fixed truncations, and contains only necessary number of nodes in the model.
Figure 7 shows a global n-gram order distribution from a single posterior sample of Gibbs iteration in
?-gram VPYLM. Note that since we added an infinite number of dummy symbols to the sentence
heads as usual, every word context has a maximum possible length of ?. We can see from this
figure that the context lengths that were actually used decay largely exponentially, as intuitively
expected. Because of the tradeoff between using a longer, more predictive context and the penalty
incurred when reaching a deeper node, interestingly a peak emerges around n = 3 ? 4 as a global
phenomenon.
With regard to the hyperparameter that defines the prior forms of suffix trees, we used a (4, 1)prior in this experiment. In fact, this hyperparameter can be optimized by the empirical Bayes
method using each Beta posterior of qi in (6). By using the Newton-Raphson iteration of [13], this
converged to (0.85, 0.57) on a 1 million word subset of the NAB corpus. However, we can see that
the performance does not depend significantly on the prior. Figure 9 shows perplexity results for the
same data, using (?, ?) ? (0.1 ? 10)?(0.1 ? 10). We can see from this figure that the performance
is almost stable, except when ? is significantly greater than ?. Finally, we show in Figure 8 some
?stochastic pharases? in Section 3.4 induced on the NAB corpus.
4.3
Variable Order Topic Model
While previous approaches to latent topic modeling assumed a fixed order such as unigrams or
bigrams, the order is generally not fixed and unknown to us. Therefore, we used a Gibbs sampler
for the Markov chain LDA [14] and augmented it by sampling Markov orders at the same time.
Because ?topic-specific? sequences constitute only some part of the entire data, we assumed that the
?generic? model generated the document according to probability ?, and the rest are generated by
the LDA of VPYLM. We endow ? a uniform Beta prior and used the posterior estimate for sampling
that will differ document to document.
For the experiment, we used the NIPS papers dataset of 1739 documents. Among them, we used
random 1500 documents for training and random 50 documents from the rest of 239 documents
for testing, after the same preprocessing for the NAB corpus. We set a symmetric Dirichlet prior
p(s, n)
0.9784
0.9726
0.9512
0.9026
0.8896
0.8831
0.7566
0.7134
0.6617
:
Stochastic phrases in the suffix tree
primary new issues
? at the same time
is a unit of
from # % in # to # %
in a number of
in new york stock exchange composite trading
mechanism of the european monetary
increase as a result of
tiffany & co.
PPL
136
134
132
130
128
126
124
0.1
134
132
130
128
126
124
122
0.5 1
?
2
0.5
5 10
1
2
5
10
?
0.1
Figure 9: Perplexity results using different hyperparameters on the 1M NAB
corpus.
Figure 8: ?Stochastic phrases? induced by the 8-gram
VPYLM trained on the NAB corpus.
p(n, s) Phrase
0.9904 in section #
0.9900 the number of
0.9856 in order to
0.9832 in table #
0.9752 dealing with
0.9693 with respect to
(a) Topic 0 (?generic?)
p(n, s)
0.9853
0.9840
0.9630
0.9266
0.8939
0.8756
Phrase
et al
receptive field
excitatory and inhibitory
in order to
primary visual cortex
corresponds to
(b) Topic 1
p(n, s)
0.9823
0.9524
0.9081
0.8206
0.8044
0.7790
Phrase
monte carlo
associative memory
as can be seen
parzen windows
in the previous section
american institute of physics
(c) Topic 4
Figure 10: Topic based stochastic pharases.
? = 0.1 and the number of topics M = 5, nmax = 5 and ran a N = 200 Gibbs iterations to obtain
a single posterior set of models.
Although in predictive perplexity the improvements are slight (VPYLDA=116.62,
VPYLM=117.28), ?stochastic pharases? computed on each topic VPYLM show interesting
characteristics shown in Figure 10. Although we used a small number of latent topics in this
experiment to avoid data sparsenesses, in future research we need a more flexible topic model where
the number of latent topics will differ from node to node in the suffix tree.
5
Discussion and Conclusion
In this paper, we presented a completely generative approach to estimating variable order Markov
processes. By extending a stick-breaking process ?vertically? over a suffix tree of hierarchical Chinese restaurant processes, we can make a posterior inference on the Markov orders from which each
data originates.
Although our architecture looks similar to Polya Trees [15], in Polya Trees their recursive partitions
are independent while our stick-breakings are hierarchically organized according to the suffix tree.
In addition to apparent application of our approach to hierarchical continuous distributions like
Gaussians, we expect that the basic model can be used for the distribution of latent variables. Each
data is assigned to a deeper level just when needed, and resides not only in leaf nodes but also in the
intermediate nodes, by stochastically descending a clustering hierarchy from the root as described
in this paper.
References
[1] C. E. Shannon. A mathematical theory of communication. Bell System Technical Journal, 27:379?423,
623?656, 1948.
[2] Alberto Apostolico and Gill Bejerano. Optimal amnesic probabilistic automata, or, how to learn and
classify proteins in linear time and space. Journal of Computational Biology, 7:381?393, 2000.
[3] F.M.J. Willems, Y.M. Shtarkov, and T.J. Tjalkens. The Context-Tree Weighting Method: Basic Properties.
IEEE Trans. on Information Theory, 41:653?664, 1995.
[4] Frederick Jelinek. Statistical Methods for Speech Recognition. Language, Speech, and Communication
Series. MIT Press, 1998.
[5] Peter Buhlmann and Abraham J. Wyner. Variable Length Markov Chains. The Annals of Statistics,
27(2):480?513, 1999.
[6] Fernando Pereira, Yoram Singer, and Naftali Tishby. Beyond Word N-grams. In Proc. of the Third
Workshop on Very Large Corpora, pages 95?106, 1995.
[7] Dana Ron, Yoram Singer, and Naftali Tishby. The Power of Amnesia. In Advances in Neural Information
Processing Systems, volume 6, pages 176?183, 1994.
[8] Andreas Stolcke. Entropy-based Pruning of Backoff Language Models. In Proc. of DARPA Broadcast
News Transcription and Understanding Workshop, pages 270?274, 1998.
[9] Yee Whye Teh. A Bayesian Interpretation of Interpolated Kneser-Ney. Technical Report TRA2/06,
School of Computing, NUS, 2006.
[10] Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. Interpolating Between Types and Tokens by
Estimating Power-Law Generators. In NIPS 2005, 2005.
[11] Daniel Sleator and Robert Tarjan. Self-Adjusting Binary Search Trees. JACM, 32(3):652?686, 1985.
[12] Joshua T. Goodman. A Bit of Progress in Language Modeling, Extended Version. Technical Report
MSR?TR?2001?72, Microsoft Research, 2001.
[13] Thomas P. Minka. Estimating a Dirichlet distribution, 2000. http://research.microsoft.com/?minka/papers/
dirichlet/.
[14] Mark Girolami and Ata Kab?an. Simplicial Mixtures of Markov Chains: Distributed Modelling of Dynamic User Profiles. In NIPS 2003. 2003.
[15] R. Daniel Mauldin, William D. Sudderth, and S. C. Williams. Polya Trees and Random Distributions.
Annals of Statistics, 20(3):1203?1221, 1992.
| 3281 |@word msr:1 version:1 bigram:3 compression:3 proportion:2 tr:1 accommodate:1 recursively:4 configuration:1 contains:1 fragment:1 united:3 series:1 daniel:2 document:7 interestingly:1 bejerano:1 current:1 z2:1 nt:13 com:1 must:3 stemming:1 partition:1 remove:2 generative:5 prohibitive:1 leaf:2 item:1 fewer:2 accordingly:2 beginning:1 reciprocal:1 node:50 lexicon:4 ron:1 five:1 mathematical:2 along:3 shtarkov:1 beta:5 amnesia:1 theoretically:2 expected:2 rapid:1 behavior:1 automatically:1 actual:2 little:1 window:1 becomes:2 begin:2 estimating:5 xx:1 suffice:1 null:1 lowest:1 pto:1 kind:1 connective:1 string:1 temporal:1 every:2 nation:1 stick:6 exchangeable:2 unit:4 farthest:1 originates:1 appear:1 dropped:1 vertically:3 despite:2 shortened:1 id:2 meet:1 path:6 kneser:1 might:2 burn:1 alice:3 co:2 ngram:3 bi:6 range:1 testing:2 practice:3 block:1 recursive:1 differs:1 empirical:2 bell:1 significantly:3 composite:1 word:28 induce:1 integrating:1 griffith:1 nmax:4 protein:1 get:2 bh:2 put:2 context:24 yee:1 descending:4 equivalent:1 customer:35 williams:1 independently:1 automaton:1 tjalkens:1 simplicity:2 notion:1 increment:1 phrasal:1 hierarchy:1 play:1 annals:2 user:1 us:1 amortized:1 recognition:4 role:1 capture:1 descend:3 news:1 prospect:1 ran:1 mentioned:1 principled:1 deploys:1 insertion:1 dynamic:1 trained:2 depend:2 predictive:6 completely:1 darpa:1 stock:1 represented:1 america:3 fast:1 monte:1 choosing:1 eos:1 apparent:1 larger:1 widely:1 statistic:2 nab:8 exogeneous:1 final:3 associative:1 hoc:1 sequence:13 propose:1 product:1 frequent:1 monetary:1 organizing:1 parent:9 extending:3 sea:1 comparative:1 generating:1 rescale:1 school:1 polya:3 progress:1 strong:1 auxiliary:1 trading:1 girolami:1 differ:4 concentrate:1 stochastic:7 exchange:1 assign:1 fix:1 wall:1 alleviate:1 decompose:1 preliminary:1 elementary:1 secondly:1 extension:1 around:1 eiichiro:2 predict:2 week:1 trigram:2 cohesion:2 resample:3 purpose:1 proc:2 him:2 create:1 city:2 tool:1 mit:1 always:3 rather:1 reaching:4 avoid:1 endow:1 she:9 improvement:1 modelling:1 mainly:1 contrast:1 inference:5 suffix:35 stopping:2 eliminate:1 entire:1 mochihashi:1 hidden:7 issue:1 among:2 flexible:2 noun:1 special:2 field:2 construct:1 having:1 ng:1 sampling:4 biology:1 represents:3 unnecessarily:1 look:1 future:1 report:2 inherent:1 producer:1 gamma:1 sumita:2 divergence:1 phase:1 n1:1 microsoft:2 attempt:1 william:1 huge:2 possibility:1 investigate:1 mixture:3 arrives:1 punctuation:1 yielding:1 chain:3 beforehand:1 emit:1 necessary:1 improbable:1 shorter:2 traversing:1 tree:53 old:1 walk:2 desired:1 re:1 theoretical:1 stopped:1 classify:1 modeling:4 obstacle:1 phrase:11 ordinary:1 introducing:3 cost:1 subset:2 uniform:2 conducted:3 johnson:1 too:1 tishby:2 reported:1 dependency:8 st:24 peak:1 daichi:2 probabilistic:2 physic:1 parzen:1 concrete:1 recorded:1 opposed:1 broadcast:1 stochastically:6 american:1 japan:2 includes:1 int:5 explicitly:1 caused:2 stream:1 later:1 root:6 unigrams:1 reached:1 portion:1 bayes:1 option:1 acid:1 qk:1 largely:2 who:1 yield:1 efficiently:1 correspond:1 characteristic:1 simplicial:1 goldwater:1 bayesian:4 carlo:1 served:1 ah:2 converged:1 deploying:2 suffers:1 reach:2 minka:2 associated:3 stop:5 sampled:3 dataset:1 adjusting:1 emerges:1 infers:2 dimensionality:1 organized:1 carefully:1 actually:3 back:1 appears:1 higher:2 follow:1 furthermore:1 just:1 lack:1 defines:2 lda:2 building:1 kab:1 true:1 assigned:1 symmetric:1 laboratory:1 iteratively:1 attractive:1 during:1 self:2 maintained:1 
naftali:2 anything:1 criterion:2 leftmost:1 trying:1 whye:1 complete:1 performs:1 wise:1 common:1 empirically:1 jp:2 exponentially:5 million:1 extend:3 he:3 occurred:1 slight:1 volume:1 wonderland:2 interpretation:1 gibbs:8 ai:6 vanilla:1 similarly:1 language:18 stable:1 longer:1 cortex:1 etc:1 add:4 posterior:6 showed:1 perplexity:11 indispensable:1 compound:1 binary:3 joshua:1 seen:1 greater:2 additional:1 gill:1 employed:2 determine:1 converge:1 fernando:1 branch:5 full:1 multiple:1 bread:2 kyoto:2 ntt:2 ing:1 match:1 faster:1 technical:3 long:3 raphson:1 alberto:1 post:1 prevented:1 equally:1 qi:16 prediction:4 basic:4 heterogeneous:2 expectation:1 poisson:2 iteration:6 addition:2 want:3 interval:3 diagram:1 sudderth:1 standpoint:1 appropriately:3 goodman:1 rest:2 induced:2 sent:4 delegate:1 backwards:1 ideal:1 intermediate:1 keihanna:2 enough:1 superset:1 stolcke:1 restaurant:12 architecture:1 andreas:1 sibling:2 tradeoff:2 qj:1 expression:4 passed:1 penalty:1 peter:1 speech:4 interpolates:1 york:1 constitute:1 deep:4 generally:1 backoff:1 amount:1 nonparametric:2 desk:1 dna:2 generate:5 http:1 inhibitory:1 notice:2 estimated:4 correctly:1 dummy:1 discrete:4 write:2 hyperparameter:2 ppl:1 taught:1 key:2 threshold:4 nevertheless:1 sharon:1 concreteness:1 sum:1 you:2 arrive:1 place:1 almost:1 comparable:1 bit:1 opposition:1 badly:1 strength:1 interpolated:1 span:1 extremely:1 pruned:1 cslab:1 according:10 truncate:1 smaller:2 increasingly:2 character:14 shallow:2 penetration:4 s1:3 making:1 intuitively:2 restricted:1 sleator:1 computationally:1 count:11 mechanism:1 needed:2 know:4 singer:2 end:3 gaussians:4 hierarchical:16 ocr:1 generic:2 occurrence:1 ney:1 encounter:1 struct:1 original:1 thomas:2 clustering:2 dirichlet:6 newton:1 yoram:2 chinese:9 hikaridai:2 especially:1 build:1 overflow:2 bl:1 already:2 question:1 added:1 receptive:1 primary:2 dependence:1 usual:1 italic:1 said:2 exhibit:1 separate:1 mapped:1 atr:3 street:1 topic:15 seating:1 length:8 relationship:1 ql:1 robert:1 proper:2 affiliated:1 unknown:3 zt:3 shallower:1 teh:1 apostolico:1 willems:1 amnesic:1 markov:48 sing:7 finite:1 possessed:2 behave:1 descent:1 defining:1 hinton:1 communication:3 excluding:1 head:1 extended:1 arbitrary:1 tarjan:1 buhlmann:1 namely:1 kl:1 sentence:6 z1:1 optimized:1 deletion:1 nu:1 nip:3 trans:1 frederick:1 beyond:1 usually:4 below:1 pioneering:1 built:1 including:1 max:1 memory:3 power:3 suitable:2 natural:7 recursion:1 pact:1 wyner:1 irrespective:1 text:1 prior:12 understanding:1 law:2 expect:2 permutation:1 generation:3 limitation:1 interesting:1 dana:1 generator:1 incurred:1 degree:1 proxy:6 consistent:1 kecl:1 cry:2 translation:1 ata:1 course:1 excitatory:1 totalling:1 supported:1 last:1 copy:2 truncation:3 token:1 allow:1 deeper:3 institute:1 jelinek:1 sparse:1 distributed:1 pitman:5 yor:5 regard:3 grammatical:1 depth:15 lowercased:1 gram:11 valid:1 resides:2 qn:3 avoids:1 author:1 reside:1 preprocessing:1 employing:1 far:1 emitting:1 pruning:4 cartel:1 implicitly:1 transcription:2 dealing:1 global:3 corpus:10 assumed:3 consuming:1 subsequence:3 continuous:2 latent:7 search:3 comma:1 table:7 learn:1 inherently:1 interpolating:2 european:2 japanese:1 inherit:1 main:1 hierarchically:1 decrement:1 s2:1 abraham:1 hyperparameters:1 profile:1 n2:1 tra2:1 child:3 amino:1 augmented:1 originated:5 pereira:1 candidate:4 breaking:7 third:2 weighting:1 down:2 formula:1 saturate:1 specific:2 unigram:1 mauldin:1 symbol:18 decay:3 sit:2 workshop:2 sparseness:1 entropy:1 appearance:1 likely:1 jacm:1 
visual:1 contained:1 corresponds:4 conditional:1 consequently:1 price:1 butter:2 change:1 infinite:23 typical:1 reducing:1 except:1 sampler:6 called:1 total:3 shannon:2 cond:1 select:1 mark:3 bioinformatics:1 evaluate:1 phenomenon:1 |
2,516 | 3,282 | Variational Inference for Diffusion Processes
Manfred Opper
Technical University Berlin
opperm@cs.tu-berlin.de
C?edric Archambeau
University College London
c.archambeau@cs.ucl.ac.uk
Yuan Shen
Aston University
y.shen2@aston.ac.uk
Dan Cornford
Aston University
d.cornford@aston.ac.uk
John Shawe-Taylor
University College London
jst@cs.ucl.ac.uk
Abstract
Diffusion processes are a family of continuous-time continuous-state stochastic
processes that are in general only partially observed. The joint estimation of the
forcing parameters and the system noise (volatility) in these dynamical systems is
a crucial, but non-trivial task, especially when the system is nonlinear and multimodal. We propose a variational treatment of diffusion processes, which allows
us to compute type II maximum likelihood estimates of the parameters by simple gradient techniques and which is computationally less demanding than most
MCMC approaches. We also show how a cheap estimate of the posterior over the
parameters can be constructed based on the variational free energy.
1
Introduction
Continuous-time diffusion processes, described by stochastic differential equations (SDEs), arise
naturally in a range of applications from environmental modelling to mathematical finance [13]. In
statistics the problem of Bayesian inference for both the state and parameters, within partially observed, non-linear diffusion processes has been tackled using Markov Chain Monte Carlo (MCMC)
approaches based on data augmentation [17, 11], Monte Carlo exact simulation methods [6], or
Langevin / hybrid Monte Carlo methods [1, 3]. Within the signal processing community solutions
to the so called Zakai equation [12] based on particle filters [8], a variety of extensions to the Kalman
filter/smoother [2, 5] and mean field analysis of the SDE together with moment closure methods [10]
have also been proposed. In this work we develop a novel variational approach to the problem of
approximate inference in continuous-time diffusion processes, including a marginal likelihood (evidence) based inference technique for the forcing parameters. In general, joint parameter and state
inference using naive methods is complicated due to dependencies between state and system noise
parameters.
We work in continuous time, computing distributions over sample paths1 , and discretise only in
our posterior approximation, which has advantages over methods based on discretising the SDE
directly [3]. The approximate inference approach we describe is more computationally efficient than
competing Monte Carlo algorithms and could be further improved in speed by defining a variety
of sub-optimal approximations. The approximation is also more accurate than existing Kalman
smoothing methods applied to non-linear systems [4]. Ultimately, we are motivated by the critical
requirement to estimate parameters within large environmental models, where at present only a small
number of Kalman filter/smoother based estimation algorithms have been attempted [2], and there
have been no likelihood based attempts to estimate the system noise forcing parameters.
1
A sample path is a continuous-time realisation of a stochatic process in a certain time interval. Hence, a
sample path is an infinite dimensional object.
1
In Section 2 and 3, we introduce the formalism for a variational treatment of partially observed diffusion processes with measurement noise and we provide the tools to estimate the optimal variational
posterior process [4]. Section 4 deals with the estimation of the drift and the system noise parameter, as well as the estimation of the optimal initial conditions. Finally, the approach is validated on a
bi-stable nonlinear system in Section 5. In this context, we also discuss how to construct an estimate
of the posterior distribution over parameters based on the variational free energy.
2
Diffusion processes with measurement error
Consider the continuous-time continuous-state stochastic process X = {Xt , t0 ? t ? tf }. We
assume this process is a d-dimensional diffusion process. Its time evolution is described by the
following SDE (to be interpreted as an Ito stochastic integral):
dXt = f? (t, Xt ) dt + ?1/2 dWt ,
dWt ? N (0, dtI).
(1)
The nonlinear vector function f? defines the deterministic drift and the positive semi-definite matrix
? ? Rd?d is the system noise covariance. The diffusion is modelled by a d-dimensional Wiener
process W = {Wt , t0 ? t ? tf } (see e.g. [13] for a formal definition). Eq. (1) defines a process
with additive system noise. This might seem restrictive at first sight. However, it can be shown
[13, 17, 6] that a range of state dependent stochastic forcings can be transformed into this form.
It is further assumed that only a small number of discrete-time latent states are observed and that the
observations are subject to measurement error. We denote the set of observations at the discrete times
N
N
{tn }N
n=1 by Y = {yn }n=1 and the corresponding latent states by {xn }n=1 , with xn = Xt=tn .For
simplicity, the measurement noise is modelled by a zero-mean multivariate Gaussian density,with
covariance matrix R ? Rd?d .
3
Approximate inference for diffusion processes
Our approximate inference scheme builds on [4] and is based on a variational inference approach
(see for example [7]). The aim is to minimise the variational free energy, which is defined as follows:
p(Y, X|?, ?)
F? (q, ?) = ? ln
, X = {Xt , t0 ? t ? tf },
(2)
q(X|?)
q
where q(X|?) is an approximate posterior process over sample paths in the interval [t0 , tf ] and ?
are the parameters, excluding the stochastic forcing covariance matrix ?. Hence, this quantity is an
upper bound to the negative log-marginal likelihood:
? ln p(Y |?, ?) = F? (q, ?) ? KL [q(X|?)kp(X|Y, ?, ?)] ? F? (q, ?).
(3)
As noted in Appendix A, this bound is finite if the approximate process is another diffusion process
with a system noise covariance chosen to be identical to that of the prior process induced by (1).
The standard approach for learning the parameters in presence of latent variables is to use an EM
type algorithm [9]. However, since the variational distribution is restricted to have the same system
noise covariance (see Appendix A) as the true posterior, the EM algorithm would leave this covariance completely unchanged in the M step and cannot be used for learning this crucial parameter.
Therefore, we adopt a different approach, which is based on a conjugate gradient method.
3.1
Optimal approximate posterior process
We consider an approximate time-varying linear process with the same diffusion term, that is the
same system noise covariance:
dXt = g(t, Xt ) dt + ?1/2 dWt ,
dWt ? N (0, dtI),
(4)
where g(t, x) = ?A(t)x+b(t), with A(t) ? Rd?d and b(t) ? Rd . In other words, the approximate
posterior process q(X|?) is restricted to be a Gaussian process [4]. The Gaussian marginal at time
t is defined as follows:
q(Xt |?) = N (Xt |m(t), S(t)),
2
t0 ? t ? tf ,
(5)
where m(t) ? Rd and S(t) ? Rd?d are respectively the marginal mean and the marginal covariance
at time t. In the rest of the paper, we denote q(Xt |?) by the shorthand notation qt .
For fixed parameters ? and assuming that there is no observation at the initial time t0 , the optimal
approximate posterior process q(X|?) is the one minimizing the variational free energy, which is
given by (see Appendix A)
Z tf
Z tf
X
F? (q, ?) =
Esde (t) dt +
Eobs (t)
?(t ? tn ) dt + KL [q0 kp0 ] .
(6)
t0
t0
n
The function ?(t) is Dirac?s delta function. The energy functions Esde (t) and Eobs (t) are defined as
follows:
1
Esde (t) =
(f? (t, Xt ) ? g(t, Xt ))> ??1 (f? (t, Xt ) ? g(t, Xt )) q ,
(7)
t
2
1
d
1
Eobs (t) =
(Yt ? Xt )> R?1 (Yt ? Xt ) qt + ln 2? + ln |R|.
(8)
2
2
2
where {Yt , t0 ? t ? tf } is the underlying continuous-time observable process.
3.2
Smoothing algorithm
The variational parameters to optimise in order to find the optimal Gaussian process approximation
are A(t), b(t), m(t) and S(t). For a linear SDE with additive system noise, it can be shown that
the time evolution of the means and the covariances are described by a set of ordinary differential
equations [13, 4]:
?
m(t)
= ?A(t)m(t) + b(t),
(9)
?
S(t)
= ?A(t)S(t) ? S(t)A> (t) + ?,
(10)
where ? denotes the time derivtive. These equations provide us with consistency constraints for the
marginal means and the marginal covariances along sample paths. To enforce these constraints we
formulate the Lagrangian
Z tf
?
L?,? = F? (q, ?) ?
?> (t) m(t)
+ A(t)m(t) ? b(t) dt
t0
Z
tf
?
n
o
?
tr ?(t) S(t)
+ 2A(t)S(t) ? ?
dt,
t0
d?d
where ?(t) ? Rd and ?(t) ? R
(11)
are time dependent Lagrange multipliers, with ?(t) symmetric.
First, taking the functional derivatives of L?,? with respect to A(t) and b(t) results in the following
gradient functions:
?A L?,? (t) = ?A Esde (t) ? ?(t)m> (t) ? 2?(t)S(t),
?b L?,? (t) = ?b Esde (t) + ?(t).
The gradients ?A Esde (t) and ?b Esde (t) are derived in Appendix B.
(12)
(13)
Secondly, taking the functional derivatives of L?,? with respect to m(t) and S(t), setting to zero
and rearranging leads to a set of ordinary differential equations, which describe the time evolution
of the Lagrange multipliers, along with jump conditions when there are observations:
?
?(t)
= ??m Esde (t) + A> (t)?(t), ?+ = ?? ? ?m Eobs (t)|t=t ,
(14)
?
?(t)
= ??S Esde (t) + 2?(t)A(t),
n
?+
n
=
n
??
n
n
? ?S Eobs (t)|t=tn .
(15)
The optimal variational functions can be computed by means of a gradient descent technique, such
as the conjugate gradient (see e.g., [16]). The explicit gradients with respect to A(t) and b(t)
are given by (12) and (13). Since m(t), S(t), ?(t) and ?(t) are dependent on these parameters,
one needs also to take the corresponding implicit derivatives into account. However, these implicit
gradients vanish if the consistency constraints for the means (9) and the covariances (10), as well as
the ones for the Lagrange multipliers (14-15), are satisfied. One way to achieve this is to perform a
forward propagation of the means and the covariances, followed by a backward propagation of the
Lagrange multipliers, and then to take a gradient step. The resulting algorithm for computing the
optimal posterior q(X|?) over sample paths is detailed in Algorithm 1.
3
Algorithm 1 Compute the optimal q(X|?).
1: input(m0 , S0 , ?, ?, t0 , tf , ?t, ?)
2: K ? (tf ? t0 )/?t
3: initialise {Ak , bk }k?0
4: repeat
5:
for k = 0 to K ? 1 do
6:
mk+1 ? mk ? (Ak mk ? bk )?t
7:
Sk+1 ? Sk ? (Ak Sk + Sk A>
k ? ?)?t
8:
end for{forward propagation}
9:
for k = K to 1 do
10:
?k?1 ? ?k + (?m Esde |t=tk ? A>
k ?k )?t
11:
?k?1 ? ?k + (?S Esde |t=tk ? 2?k Ak )?t
12:
if observation at tk?1 then
13:
?k?1 ? ?k?1 + ?m Eobs |t=tk?1
14:
?k?1 ? ?k?1 + ?S Eobs |t=tk?1
15:
end if{jumps}
16:
end for{backward sweep (adjoint operation)}
17:
update {Ak , bk }k?0 using the gradient functions (12) and (13)
18: until minimum of L?,? is attained {optimisation loop}
19: return {Ak , bk , mk , Sk , ?k , ?k }k?0
4
Parameter estimation
The parameters to optimise include the parameters of the prior over the initial state, the drift function parameters and the system noise covariance. The estimation of the parameters related to the
observable process are not discussed in this work, although it is a straightforward extension.
The smoothing algorithm described in the previous section computes the optimal posterior process
by providing us with the stationary solution functions A(t) and b(t). Therefore, when subsequently
optimising the parameters we only need to compute their explicit derivatives; the implicit ones
vanish since ?A L?,? = 0 and ?b L?,? = 0. Before computing the gradients, we integrate (11) by
parts to make the boundary conditions explicit. This leads to
Z tf n
o
>
>
L?,? = F? (q, ?) ?
?> (t) A(t)m(t) ? b(t) ? ?? (t)m(t) dt ? ?>
f mf + ?0 m0
t0
Z
tf
?
n
o
?
tr ?(t) 2A(t)S(t) ? ? ? ?(t)S(t)
dt ? tr {?f Sf } + tr {?0 S0 } ,
(16)
t0
At the final time tf , there are no consistency constraints, that is ?f and ?f are both equal to zero.
4.1
Initial state
The initial variational posterior q(x0 ) is chosen equal to N (x0 |m0 , S0 ) to ensure that the approximate process is a Gaussian one. Taking the derivatives of (16) with respect to m0 and S0 results in
the following expressions:
1 ?1
?m0 L?,? = ?0 + ?0?1 (m0 ? ?0 ), ?S0 L?,? = ?0 +
? I ? S?1
,
(17)
0
2 0
where the prior p(x0 ) is assumed to be an isotropic Gaussian density with mean ?0 . Its variance ?0
is taken sufficiently large to give a broad prior.
4.2
Drift
The gradients for the drift function parameters ? f only depend on the total energy associated to the
SDE. Their general expression is given by
Z tf
??f L?,? =
??f Esde (t) dt,
(18)
t0
4
where ??f Esde (t) =
D
>
(f? (t, Xt ) ? g(t, Xt )) ??1 ??f f? (t, Xt )
E
qt
. Note that the observations
do play a role in this gradient as they enter through g(t, Xt ) and the expectation w.r.t. q(Xt |?).
4.3
System noise
Estimating the system noise covariance (or volatility) is essential as the system noise, together with
the drift function, determines the dynamics. In general, this parameter is difficult to estimate using
an MCMC approach because the efficiency is strongly dependent on the discrete approximation of
the SDE and most methods break down when the time step ?t gets too small [11, 6]. For example
in a Bayesian MCMC approach, which alternates between sampling paths and parameters, the latent
paths imputed between observations must have a system noise parameter which is arbitrarily close
to its previous value in order to be accepted by a Metropolis sampler. Hence, the algorithm becomes
extremely slow. Note, that for the same reason, a naive EM algorithm within our approach breaks
down. However, in our method, we can simply compute approximations to the marginal likelihood
and its gradient directly. In the next section, we will compare our results to a direct MCMC estimate
of the marginal likelihood which is a time consuming method.
The gradient of (16) with respect to ? is given by
Z tf
Z
?? L?,? =
?? Esde (t) dt +
t0
tf
?(t) dt,
(19)
t0
D
E
>
where ?? Esde (t) = ? 21 ??1 (f? (t, Xt ) ? g(t, Xt )) (f? (t, Xt ) ? g(t, Xt ))
??1 .
qt
5
Experimental validation on a bi-stable system
In order to validate the approach, we consider the 1 dimensional double-well system:
f? (t, x) = 4x ? ? x2 , ? > 0,
(20)
where f? (t, x) is the drift function. This dynamical system is highly nonlinear and its stationary
distribution is multi-modal. It has two stable states, one in x = ?? and one in x = +?. The system
is driven by the system noise, which makes it occasionally flip from one well to the other.
In the experiments, we set the drift parameter ? to 1, the system noise standard deviation ? to 0.5 and
the measurement error standard deviation r to 0.2. The time step for the variational approximation
is set to ?t = 0.01, which is identical to the time resolution used to generate the original sample
path. In this setting, the exit time from one of the wells is 4000 time units [15]. In other words, the
transition from one well to the other is highly unlikely in the window of roughly 8 time units that
we consider and where a transition occurs.
Figure 1(a) compares the variational solution to the outcomes of a hybrid MCMC simulation of the
posterior process using the true parameter values. The hybrid MCMC approach was proposed in
[1]. At each step of the sampling process, an entire sample path is generated. In order to keep the
acceptance of new paths sufficiently high, the basic MCMC algorithm is combined with ideas from
Molecular Dynamics, such that the MCMC sampler moves towards regions of high probability in the
state space. An important drawback of MCMC approaches is that it might be extremely difficult to
monitor their convergence and that they may require a very large number of samples before actually
converging. In particular, over 100, 000 sample paths were necessary to reach convergence in the
case of the double-well system.
The solution provided by the hybrid MCMC is here considered as the base line solution. One can observe that the variational solution underestimates the uncertainty (smaller error bars). Nevertheless,
the time of the transition is correctly located. Convergence of the smoothing algorithm was achieved
in approximately 180 conjugate gradient steps, each one involving a forward and backward sweep.
The optimal parameters and the optimal initial conditions for the variational solution are given by
?
? = 0.72, ??f = 0.85, m
? 0 = 0.88, s?0 = 0.45.
(21)
Convergence of the outer optimization loop is typically reached after less then 10 conjugate gradient
steps. While the estimated value for the drift parameter is within 15% percent from its true value,
5
2
1.6
1.5
1.4
1
1.2
x
0.5
1
0
0.8
!0.5
0.6
!1
0.4
!1.5
0.2
!2
0
0
1
2
3
4
t
5
6
7
8
0.3
(a)
0.4
0.5
0.6
!2
0.7
0.8
0.9
(b)
Figure 1: (a) Variational solution (solid) compared to the hybrid MCMC solution (dashed), using
the true parameter values. The curves denote the mean paths and the shaded regions are the twostandard deviation noise tubes. (b) Posterior of the system noise variance (diffusion term). The plain
curve and the dashed curve are respectively the approximations of the posterior shape based on the
variational free energy and MCMC.
the deviation of the system noise is worse. Deviations may be explained by the fact that the number
of observations is relatively small. Furthermore, we have chosen a sample path which contains
a transition between the two wells within a small time interval and is thus highly untypical with
respect to the prior distribution. This fact was experimentally assessed by estimating the parameters
on a sample path without transition, in a time window of the same size. In this case, we obtained
estimate roughly within 5% of the true parameter values: ?
? = 0.46 and ??f = 0.92. Finally, it turns
out that our estimate for ?
? is close to the one obtained from the MCMC approach as discussed next.
Posterior distribution over the parameters
Interestingly, minimizing the free energy F?2 for different values of ? provides us with much more
information than a single point estimate for the parameters [14]. Using a suitable prior over p(?),
we can approximate the posterior over the system noise variance via
p(? 2 |Y ) ? e?F?2 p(? 2 ),
(22)
where we take e?F?2 (at its minimum) as an approximation to the marginal likelihood of the observations p(Y |? 2 ). To illustrate this point, we assume a non-informative Gamma prior p(? 2 ) =
G(?, ?), with ? = 10?3 and ? = 10?3 . A comparison with preliminary MCMC estimates for
p(Y |? 2 ) for ? = 1 and a set of system noise variances indicates that the shape of our approximation
is a reasonable indicator of the shape of the posterior. Figure 1(b) shows that at least the mean and
the variance of the density come out fairly well.
6
Conclusion
We have presented a variational approach to the approximate inference of stochastic differential
equations from a finite set of noisy observations. So far, we have tested the method on a one dimensional bi-stable system only. Comparison with a Monte Carlo approach suggests that our method
can reproduce the posterior mean fairly well but underestimates the variance in the region of the
transition. Parameter estimates also agree well with the MC predictions.
In the future, we will extend our method in various directions. Although our approach is based on
a Gaussian approximation of the posterior process, we expect that one can improve on it and obtain
non-Gaussian predictions at least for various marginal posterior distributions, including that of the
latent variable Xt at a fixed time t. This should be possible by generalising our method for the
computation of a non-Gaussian shaped probability density for the system noise parameter using the
free energy. An important extension of our method will be to systems with many degrees of freedom.
6
We hope that the possibility of using simpler suboptimal parametrisations of the approximating
Gaussian process will allow us to obtain a tractable inference method that scales well to higher
dimensions.
Acknowledgments
This work has been funded by the EPSRC as part of the Variational Inference for Stochastic Dynamic
Environmental Models (VISDEM) project (EP/C005848/1).
A
The Kullback-Leibler divergence interpreted as a path integral
In this section, we show that the Kullback-Leibler divergence between the posterior process
p(Xt |Y, ?, ?) and its approximation q(X|?) can be interpreted as a path integral over time. It
is an average over all possible realisations, called sample paths, of the continuous-time (i.e., infinite
dimensional) random variable described by the SDE in the time interval under consideration.
Consider the Euler-Muryama discrete approximation (see for example [13]) of the SDE (1) and its
linear approximation (4):
?xk = fk ?t + ?1/2 ?wk ,
(23)
1/2
?xk = gk ?t + ? ?wk ,
(24)
where ?xk ? xk+1 ? xk and wk ? N (0, ?tI). The vectors fk and gk are shorthand notations
for f? (tk , xk ) and g(tk , xk ). Hence, the joint distributions of discrete sample paths {xk }k?0 for the
true process and its approximation follow from the Markov property:
Y
p(x0 , . . . , xK |?) = p(x0 )
N (xk+1 |xk + fk ?t, ??t),
(25)
k>0
q(x0 , . . . , xK |?) = q(x0 )
Y
N (xk+1 |xk + gk ?t, ??t),
(26)
k>0
where p(x0 ) is the prior on the intial state x0 and q(x0 ) is assumed to be Gaussian. Note thate we
do not restrict the variational posterior to factorise over the latent states.
The Kullback-Leibler divergence between the two discretized prior processes is given by
XZ
KL [qkp] = KL [q(x0 )kp(x0 )] ?
q(xk ) hln p(xk+1 |xk )iq(xk+1 |xk ) dxk
k>0
= KL [q(x0 )kp(x0 )] +
1 X
(fk ? gk )> ??1 (fk ? gk ) q(x ) ?t,
k
2
k>0
where we omitted the conditional dependency on ? for simplicity. The second term on the right
hand side is a sum in ?t. As a result, taking limits for ?t ? 0 leads to a proper Riemann integral,
which defines an integral over the average sample path:
Z
1 tf
KL [q(X|?)kp(X|?, ?)] = KL [q0 kp0 ] +
(ft ? gt )> ??1 (ft ? gt ) qt dt,
(27)
2 t0
where X = {Xt , t0 ? t ? tf } denotes the stochastic process in the interval [t0 , tf ]. The distribution qt = q(Xt |?) is the marginal at time t for a given system noise covariance ?.
It is important to realise that the KL between the induced prior process and its approximation is
finite because the system noise covariances are chosen to be identical. If this was not the case,
the normalizing constants of p(xk+1 |xk ) and q(xk+1 |xk ) would not cancel. This would result in
KL ? ? when ?t ? 0.
If we assume that the observations are i.i.d., it follows also that
X
F? (q, ?) = ?
hln p(yn |xn )iq(xn ) + KL [q(X|?)kp(X|?, ?)] .
n
Clearly, minimising this expression with respect to the variational parameters for a given system
noise ? and for a fixed parameter vector ? is equivalent to minimising the KL between the variational posterior q(X|?) and the true posterior p(X|Y, ?, ?), since the normalizing constant is
independent of sample paths.
7
B
The gradient functions
The general expressions for the gradients of Esde (t) with respect to the variational functions are
given by
n
o
(28)
?A Esde (t) = ??1 h?x f? (t, Xt )iqt + A(t) S(t) ? ?b Esde (t)m> (t),
n
o
(29)
?b Esde (t) = ??1 ? hf? (t, Xt )iqt ? A(t)m(t) + b(t) ,
D
E
>
where h?x f? (t, Xt )iqt S(t) = f? (t, Xt ) (Xt ? m(t))
is invoked in order to obtain (28).
qt
References
[1] F. J. Alexander, G. L. Eyink, and J. M. Restrepo. Accelerated Monte Carlo for optimal estimation of time
series. Journal of Statistical Physics, 119:1331?1345, 2005.
[2] J. D. Annan, J. C. Hargreaves, N. R. Edwards, and R. Marsh. Parameter estimation in an intermediate
complexity earth system model using an ensemble Kalman filter. Ocean Modelling, 8:135?154, 2005.
[3] A. Apte, M. Hairer, A. Stuart, and J. Voss. Sampling the posterior: An approach to non-Gaussian data
assimilation. Physica D, 230:50?64, 2007.
[4] C. Archambeau, D. Cornford, M. Opper, and J. Shawe-Taylor. Gaussian process approximation of
stochastic differential equations. Journal of Machine Learning Research: Workshop and Conference
Proceedings, 1:1?16, 2007.
[5] D. Barber. Expectation correction for smoothed inference in switching linear dynamical systems. Journal
of Machine Learning Research, 7:2515?2540, 2006.
[6] A. Beskos, O. Papaspiliopoulos, G. Roberts, and P. Fearnhead. Exact and computationally efficient
likelihood-based estimation for discretely observed diffusion processes (with discussion). Journal of
the Royal Statistical Society B, 68(3):333?382, 2006.
[7] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, 2006.
[8] D. Crisan and T. Lyons. A particle approximation of the solution of the Kushner-Stratonovitch equation.
Probability Theory and Related Fields, 115(4):549?578, 1999.
[9] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via EM
algorithm. Journal of the Royal Statistical Society B, 39(1):1?38, 1977.
[10] G. L. Eyink, J. L. Restrepo, and F. J. Alexander. A mean field approximation in data assimilation for
nonlinear dynamics. Physica D, 194:347?368, 2004.
[11] A. Golightly and D. J. Wilkinson. Bayesian inference for nonlinear multivariate diffusion models observed
with error. Computational Statistics and Data Analysis, 2007. Accepted.
[12] A. H. Jazwinski. Stochastic Processes and Filtering Theory. Academic Press, New York, 1970.
[13] Peter E. Kloeden and Eckhard Platen. Numerical Solution of Stochastic Differential Equations. Springer,
Berlin, 1999.
[14] H. Lappalainen and J. W. Miskin. Ensemble learning. In M. Girolami, editor, Advances in Independent
Component Analysis, pages 76?92. Springer-Verlag, 2000.
[15] R. N. Miller, M. Ghil, and F. Gauthiez. Advanced data assimilation in strongly nonlinear dynamical
systems. Journal of the Atmospheric Sciences, 51:1037?1056, 1994.
[16] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, 2000.
[17] G. Roberts and O. Stramer. On inference for partially observed non-linear diffusion models using the
Metropolis-Hastings algorithm. Biometirka, 88:603?621, 2001.
8
| 3282 |@word closure:1 simulation:2 covariance:16 tr:4 solid:1 edric:1 moment:1 initial:6 contains:1 series:1 interestingly:1 existing:1 must:1 john:1 numerical:2 additive:2 informative:1 shape:3 cheap:1 sdes:1 update:1 stationary:2 isotropic:1 xk:23 manfred:1 provides:1 simpler:1 mathematical:1 along:2 constructed:1 direct:1 differential:6 yuan:1 shorthand:2 dan:1 introduce:1 x0:14 roughly:2 xz:1 multi:1 voss:1 discretized:1 riemann:1 kp0:2 lyon:1 window:2 becomes:1 provided:1 estimating:2 notation:2 underlying:1 project:1 sde:8 interpreted:3 dti:2 ti:1 finance:1 uk:4 unit:2 yn:2 positive:1 before:2 limit:1 switching:1 ak:6 path:20 approximately:1 might:2 suggests:1 shaded:1 archambeau:3 range:2 bi:3 acknowledgment:1 ghil:1 iqt:3 definite:1 hairer:1 word:2 get:1 cannot:1 close:2 context:1 equivalent:1 deterministic:1 lagrangian:1 yt:3 straightforward:1 shen:1 formulate:1 simplicity:2 resolution:1 initialise:1 qkp:1 play:1 exact:2 recognition:1 located:1 observed:7 role:1 epsrc:1 ep:1 ft:2 cornford:3 region:3 dempster:1 complexity:1 wilkinson:1 dynamic:4 ultimately:1 depend:1 efficiency:1 exit:1 completely:1 multimodal:1 joint:3 various:2 describe:2 london:2 monte:6 kp:5 outcome:1 statistic:2 noisy:1 laird:1 final:1 advantage:1 ucl:2 propose:1 tu:1 loop:2 achieve:1 opperm:1 adjoint:1 validate:1 dirac:1 convergence:4 double:2 requirement:1 leave:1 object:1 volatility:2 tk:7 develop:1 ac:4 illustrate:1 iq:2 qt:7 eq:1 edward:1 c:3 come:1 girolami:1 direction:1 drawback:1 filter:4 stochastic:12 subsequently:1 jst:1 require:1 preliminary:1 secondly:1 extension:3 physica:2 correction:1 sufficiently:2 considered:1 wright:1 m0:6 adopt:1 omitted:1 earth:1 estimation:9 tf:21 tool:1 hope:1 clearly:1 gaussian:13 sight:1 aim:1 fearnhead:1 crisan:1 varying:1 validated:1 derived:1 modelling:2 likelihood:9 indicates:1 inference:15 dependent:4 unlikely:1 entire:1 typically:1 jazwinski:1 transformed:1 reproduce:1 smoothing:4 fairly:2 marginal:12 field:3 construct:1 equal:2 shaped:1 sampling:3 identical:3 optimising:1 broad:1 stuart:1 cancel:1 future:1 realisation:2 gamma:1 divergence:3 attempt:1 freedom:1 factorise:1 acceptance:1 highly:3 possibility:1 chain:1 accurate:1 integral:5 necessary:1 incomplete:1 taylor:2 mk:4 eckhard:1 formalism:1 ordinary:2 deviation:5 intial:1 euler:1 too:1 dependency:2 combined:1 density:4 physic:1 together:2 augmentation:1 satisfied:1 tube:1 worse:1 derivative:5 return:1 account:1 de:1 wk:3 break:2 reached:1 hf:1 lappalainen:1 complicated:1 wiener:1 variance:6 ensemble:2 miller:1 modelled:2 bayesian:3 mc:1 carlo:6 reach:1 definition:1 underestimate:2 energy:9 realise:1 naturally:1 associated:1 treatment:2 zakai:1 actually:1 attained:1 dt:12 higher:1 follow:1 modal:1 improved:1 strongly:2 furthermore:1 implicit:3 until:1 hand:1 hastings:1 christopher:1 nonlinear:7 propagation:3 defines:3 dxk:1 true:7 multiplier:4 evolution:3 hence:4 q0:2 symmetric:1 leibler:3 deal:1 noted:1 tn:4 percent:1 variational:26 consideration:1 novel:1 invoked:1 marsh:1 functional:2 discussed:2 extend:1 measurement:5 enter:1 rd:7 consistency:3 fk:5 stochatic:1 particle:2 shawe:2 funded:1 stable:4 gt:2 base:1 posterior:26 multivariate:2 driven:1 forcing:4 occasionally:1 certain:1 verlag:1 arbitrarily:1 jorge:1 minimum:2 dashed:2 signal:1 ii:1 smoother:2 semi:1 stephen:1 technical:1 academic:1 minimising:2 molecular:1 converging:1 involving:1 basic:1 prediction:2 optimisation:1 expectation:2 achieved:1 interval:5 crucial:2 rest:1 subject:1 induced:2 seem:1 presence:1 intermediate:1 variety:2 competing:1 
suboptimal:1 restrict:1 idea:1 beskos:1 minimise:1 t0:21 motivated:1 expression:4 peter:1 york:2 detailed:1 imputed:1 generate:1 delta:1 estimated:1 correctly:1 discrete:5 kloeden:1 nevertheless:1 monitor:1 diffusion:17 backward:3 nocedal:1 sum:1 uncertainty:1 family:1 reasonable:1 appendix:4 bound:2 followed:1 tackled:1 discretely:1 constraint:4 x2:1 speed:1 extremely:2 relatively:1 alternate:1 conjugate:4 smaller:1 em:4 metropolis:2 explained:1 restricted:2 taken:1 ln:4 computationally:3 equation:9 agree:1 apte:1 discus:1 turn:1 flip:1 tractable:1 end:3 operation:1 observe:1 enforce:1 ocean:1 dwt:4 original:1 denotes:2 include:1 ensure:1 kushner:1 restrictive:1 especially:1 build:1 approximating:1 society:2 unchanged:1 sweep:2 move:1 quantity:1 occurs:1 gradient:19 berlin:3 outer:1 barber:1 trivial:1 reason:1 assuming:1 kalman:4 discretise:1 providing:1 minimizing:2 difficult:2 robert:2 gk:5 negative:1 proper:1 perform:1 upper:1 observation:11 markov:2 finite:3 descent:1 langevin:1 defining:1 excluding:1 smoothed:1 hln:2 community:1 drift:9 atmospheric:1 bk:4 kl:11 bar:1 dynamical:4 pattern:1 including:2 optimise:2 royal:2 critical:1 demanding:1 suitable:1 hybrid:5 indicator:1 advanced:1 scheme:1 improve:1 golightly:1 aston:4 naive:2 prior:10 expect:1 dxt:2 filtering:1 validation:1 integrate:1 degree:1 s0:5 rubin:1 editor:1 repeat:1 free:7 formal:1 allow:1 side:1 taking:4 boundary:1 opper:2 xn:4 transition:6 curve:3 plain:1 computes:1 dimension:1 forward:3 jump:2 far:1 approximate:13 observable:2 kullback:3 keep:1 generalising:1 assumed:3 consuming:1 continuous:10 latent:6 sk:5 rearranging:1 parametrisations:1 noise:28 arise:1 papaspiliopoulos:1 slow:1 assimilation:3 sub:1 explicit:3 sf:1 vanish:2 ito:1 untypical:1 down:2 xt:32 bishop:1 evidence:1 normalizing:2 essential:1 workshop:1 mf:1 platen:1 stramer:1 simply:1 lagrange:4 partially:4 springer:4 environmental:3 determines:1 conditional:1 towards:1 experimentally:1 infinite:2 miskin:1 wt:1 sampler:2 called:2 total:1 accepted:2 experimental:1 attempted:1 college:2 assessed:1 alexander:2 accelerated:1 mcmc:15 tested:1 |
2,517 | 3,283 | Ensemble Clustering using Semidefinite
Programming
Vikas Singh
Biostatistics and Medical Informatics
University of Wisconsin ? Madison
Lopamudra Mukherjee
Computer Science and Engineering
State University of New York at Buffalo
vsingh @ biostat.wisc.edu
lm37 @ cse.buffalo.edu
Jiming Peng
Industrial and Enterprise System Engineering
University of Illinois at Urbana-Champaign
Jinhui Xu
Computer Science and Engineering
State University of New York at Buffalo
pengj @ uiuc.edu
jinhui @ cse.buffalo.edu
Abstract
We consider the ensemble clustering problem where the task is to "aggregate"
multiple clustering solutions into a single consolidated clustering that maximizes
the shared information among given clustering solutions. We obtain several new
results for this problem. First, we note that the notion of agreement under such
circumstances can be better captured using an agreement measure based on a 2D
string encoding rather than voting strategy based methods proposed in literature.
Using this generalization, we first derive a nonlinear optimization model to maximize the new agreement measure. We then show that our optimization problem
can be transformed into a strict 0-1 Semidefinite Program (SDP) via novel convexification techniques which can subsequently be relaxed to a polynomial time
solvable SDP. Our experiments indicate improvements not only in terms of the
proposed agreement measure but also the existing agreement measures based on
voting strategies. We discuss evaluations on clustering and image segmentation
databases.
1 Introduction
In the so-called Ensemble Clustering problem, the target is to "combine" multiple clustering solutions or partitions of a set into a single consolidated clustering that maximizes the information shared (or "agreement") among all available clustering solutions. The need for this form of clustering arises
in many applications, especially real world scenarios with a high degree of uncertainty such as image
segmentation with poor signal to noise ratio and computer assisted disease diagnosis. It is quite common that a single clustering algorithm may not yield satisfactory results, while multiple algorithms
may individually make imperfect choices, assigning some elements to wrong clusters. Usually, by
considering the results of several different clustering algorithms together, one may be able to mitigate degeneracies in individual solutions and consequently obtain better solutions. The idea has been
employed successfully for microarray data classification analysis [1], computer assisted diagnosis
of diseases [2] and in a number of other applications [3].
Formally, given a data set D = {d_1, d_2, ..., d_n}, a set of clustering solutions C = {C_1, C_2, ..., C_m} obtained from m different clustering algorithms is called a cluster ensemble. Each solution, C_i, is a partition of the data into at most k different clusters. The Ensemble Clustering problem requires one to use the individual solutions in C to partition D into k clusters such that the information shared ("agreement") among the solutions of C is maximized.
1.1 Previous works
The Ensemble Clustering problem was recently introduced by Strehl and Ghosh [3], in [4] a related
notion of correlation clustering was independently proposed by Bansal, Blum, and Chawla. The
problem has attracted a fair amount of attention and a number of interesting techniques have been
proposed [3, 2, 5, 6], also see [7, 4]. Formulations primarily differ in how the objective of shared
information maximization or agreement is chosen, we review some of the popular techniques next.
The Instance Based Graph Formulation (IBGF) [2, 5] first constructs a fully connected graph G = (V, W) for the ensemble C = (C_1, ..., C_m); each node represents an element of D = {d_1, ..., d_n}. The edge weight w_ij for the pair (d_i, d_j) is defined as the number of algorithms in C that assign the nodes d_i and d_j to the same cluster (i.e., w_ij measures the togetherness frequency of d_i and d_j). Then, standard graph partitioning techniques are used to obtain a final clustering solution. In the Cluster Based Graph Formulation (CBGF), a given cluster ensemble is represented as C = {C_11, ..., C_mk} = {Ĉ_1, ..., Ĉ_{m·k}}, where C_ij denotes the ith cluster of the jth algorithm in C. Like IBGF, this approach also constructs a graph, G = (V, W), to model the correspondence (or "similarity") relationship among the mk clusters, where the similarity matrix W reflects the Jaccard similarity measure between clusters Ĉ_i and Ĉ_j.
the same group are similar to one another. Variants of the problem have also received considerable
attention in the theoretical computer science and machine learning communities. A recent paper
by Ailon, Charikar, and Newman [7] demonstrated connections to other well known problems such
as Rank Aggregation, their algorithm is simple and obtains an expected constant approximation
guarantee (via linear programming duality). In addition to [7], other results include [4, 8].
A commonality of existing algorithms for Ensemble Clustering [3, 2, 9] is that they employ a graph
construction, as a first step. Element pairs (cluster pairs or item pairs) are then evaluated and their
edges are assigned a weight that reflects their similarity. A natural question relates to whether we can
find a better representation of the available information. This will be the focus of the next section.
2 Key Observations: Two is a company, is three a crowd?
Consider an example where one is "aggregating" recommendations made by a group of family and friends for dinner table seating assignments at a wedding. The hosts would like each "table" to be
able to find a common topic of dinner conversation. Now, consider three persons, Tom, Dick, and
Harry invited to this reception. Tom and Dick share a common interest in Shakespeare, Dick and
Harry are both surfboard enthusiasts, and Harry and Tom attended college together. Because they
had strong pairwise similarities, they were seated together but had a rather dull evening.
A simple analysis shows that the three guests had strong common interests when considered two at
a time, but there was weak communion as a group. The connection of this example to the ensemble
clustering problem is clear. Existing algorithms represent the similarity between elements in D as
a scalar value assigned to the edge joining their corresponding nodes in the graph. This weight
is essentially a ?vote? reflecting the number of algorithms that assigned those two elements to the
same cluster. The mechanism seems perfect until we ask if strong pairwise coupling necessarily
implies coupling for a larger group as well. The weight metric considering two elements does not
retain information about which algorithms assigned them together. When expanding the group to
include more elements, one is not sure if a common feature exists under which the larger group is
similar. It seems natural to assign a higher priority to triples or larger groups of people that were
recommended to be seated together (must be similar under at least one feature) compared to groups
that were never assigned to the same table by any person in the recommendation group (clustering
algorithm), notwithstanding pairwise evaluations, for an illustrative example see [10]. While this
problem seems to be a distinctive disadvantage for only the IBGF approach; it also affects the CBGF
approach. This can be seen by looking at clusters as items and the Jaccard?s similarity measure as
the vote (weight) on the edges.
3 Main Ideas
To model the intuition above, we generalize the similarity metric to maximize similarity or "agreement" by an appropriate encoding of the solutions obtained from individual clustering algorithms.
More precisely, in our generalization the similarity is no longer just a scalar value but a 2D string.
The ensemble clustering problem thus reduces to a form of string clustering problem where our
objective is to assign similar strings to the same cluster.
The encoding into a string is done as follows. The data item set is given as D with |D| = n. Let
m be the number of clustering algorithms with each solution having no more than k clusters. We
represent all input information (ensemble) as a single 3D matrix, A ∈ R^{n×m×k}. For every data element d_l ∈ D, A_l ∈ R^{m×k} is a matrix whose elements are defined by
$$A_l(i, j) = \begin{cases} 1 & \text{if } d_l \text{ is assigned to cluster } i \text{ by } C_j; \\ 0 & \text{otherwise.} \end{cases} \qquad (1)$$
It is easy to see that the summation of every row of A_l equals 1. We call each A_l an A-string. Our goal is to cluster the elements D = {d_1, d_2, ..., d_n} based on the similarity of their A-strings.
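Since the construction of A is purely mechanical, a short sketch may help make it concrete. The snippet below is our own illustration (not code from the paper): it builds the n × m × k array of A-strings from the label matrix of the m base clusterings, laid out so that A[l] is the m × k A-string of item d_l with one-hot rows.

```python
import numpy as np

def build_a_strings(labels, k):
    """labels: (n, m) integer array; labels[l, j] in {0, ..., k-1} is the
    cluster that base algorithm j assigns to item l."""
    n, m = labels.shape
    A = np.zeros((n, m, k))
    for j in range(m):
        A[np.arange(n), j, labels[:, j]] = 1.0  # one-hot row per algorithm
    return A  # A[l] is the A-string of item l; each row sums to 1
```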
We now consider how to compute the clusters based on the similarity (or dissimilarity) of strings. We
note that the paper [11] by Gasieniec et al., discussed the so-called Hamming radius p-clustering and
Hamming diameter p-clustering problems on strings. Though their results shed considerable light
on the hardness of string clustering with the selected distance measures, those techniques cannot be
directly applied to the problem at hand because the objective here is fairly different from the one in
[11]. Fortunately, our analysis reveals that a simpler objective is sufficient to capture the essence of
similarity maximization in clusters using certain special properties of the A-strings.
Our approach is partly inspired by the classical k-means clustering where all data points are assigned
to the cluster based on the shortest distance to the cluster center. Imagine an ideal input instance
for the ensemble clustering problem (all clustering algorithms behave similarly): one with only k
unique members among n A-strings. The partitioning simply assigns similar strings to the same
partition. The representative for each cluster will then be exactly like its members, is a valid A-string, and can be viewed as a center in a geometric sense. General input instances will obviously
be non-ideal and are likely to contain far more than k unique members. Naturally, the centers of the
clusters will vary from its members. This variation can be thought of as noise or disagreement within
the clusters, our objective is to find a set of clusters (and centers) such that the noise is minimized
and we move very close to the ideal case. To model this, we consider the centers to be in the same
high dimensional space as the A-strings in D (though it may not belong to D). Consider an example
where a cluster i in this optimal solution contains items (d1 , d2 , . . . , d7 ). A certain algorithm Cj
in the input ensemble clusters items (d1 , d2 , d3 , d4 ) in cluster s and (d5 , d6 , d7 ) in cluster p. How
would Cj behave if evaluating the center of cluster i as a data item? The probability it assigns the
center to cluster s is 4/7 and the probability it assigns the center to cluster p is 3/7. If we emulate this
logic ? we must pick the choice with the higher probability and assign the center to such a cluster.
It can be verified that this choice minimizes the dissent of all items in cluster i to the center. The Astring for the center of cluster i will have a ?1? at position (j, s). The assignment of A-string (items)
to clusters is unknown; however, if it were somehow known, we could find the centers for all other
clusters i ? [1, k] by computing the average value at every cell of the A matrices corresponding to
the members of the cluster and rounding the largest value in every row to 1 (rest to 0) and assigning
this as the cluster center. Hence, the dissent within a cluster can be quantified simply by averaging
the matrices of elements that belong to the cluster and computing the difference to the center. Our
goal is to find such an assignment and group the A-strings so that the sum of the absolute differences
of the averages of clusters to their centers (dissent) is minimized. In the subsequent sections, we will
introduce our optimization framework for ensemble clustering based on these ideas.
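The averaging-and-rounding step just described is easy to state in code. The following is a sketch under our own naming, not the paper's implementation:

```python
import numpy as np

def center_and_dissent(A, members):
    """A: (n, m, k) A-strings; members: indices of the items in one cluster.
    Returns the cluster center (a valid A-string) and the dissent, i.e. the
    sum of absolute differences between the member average and the center."""
    avg = A[members].mean(axis=0)                        # (m, k) average A-string
    center = np.zeros_like(avg)
    center[np.arange(avg.shape[0]), avg.argmax(axis=1)] = 1.0
    dissent = np.abs(center - avg).sum()
    return center, dissent
```

In the 7-item example above, the row for algorithm C_j in the average has entries 4/7 at cluster s and 3/7 at cluster p, so rounding puts the 1 at s and that row contributes |1 - 4/7| + |0 - 3/7| = 6/7 to the dissent.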
4 Integer Program for Model 1
We start with a discussion of an Integer Program (IP, for short) formulation for ensemble clustering.
For convenience, we denote the final clustering solution by C* = {C*_1, ..., C*_k}, and C_ij denotes cluster i produced by algorithm j. The variables that constitute the IP are as follows:
$$X_{li_0} = \begin{cases} 1 & \text{if } d_l \in C^*_{i_0}; \\ 0 & \text{otherwise,} \end{cases} \qquad (2)$$
$$s_{iji_0} = \begin{cases} 1 & \text{if } C_{ij} = \arg\max_{i=1,\ldots,k} |C^*_{i_0} \cap C_{ij}|; \\ 0 & \text{otherwise.} \end{cases} \qquad (3)$$
We mention that the above definition implies that for a fixed index i_0, its center s_{iji_0} also provides an indicator to the cluster most similar to C*_{i_0} in the set of clusters produced by the clustering algorithm C_j. We are now ready to introduce the following IP:
$$\min \;\; \sum_{i_0=1}^{k} \sum_{i=1}^{k} \sum_{j=1}^{m} \left| s_{iji_0} - \frac{\sum_{l=1}^{n} A_{lij} X_{li_0}}{\sum_{l=1}^{n} X_{li_0}} \right| \qquad (4)$$
$$\text{s.t.} \quad \sum_{i_0=1}^{k} X_{li_0} = 1 \;\; \forall l \in [1, n], \qquad \sum_{l=1}^{n} X_{li_0} \ge 1 \;\; \forall i_0 \in [1, k], \qquad (5)$$
$$\sum_{i=1}^{k} s_{iji_0} = 1 \;\; \forall j \in [1, m],\; i_0 \in [1, k], \qquad X_{li_0} \in \{0, 1\}, \quad s_{iji_0} \in \{0, 1\}. \qquad (6)$$
(4) minimizes the sum of the differences between s_{iji_0} (the center for cluster C*_{i_0}) and the average of the A_{lij} bits of the data elements d_l assigned to cluster C*_{i_0}. Recall that s_{iji_0} will be 1 if C_{ij} is the most similar cluster to C*_{i_0} among all the clusters produced by algorithm C_j. Hence, if $s_{iji_0} = 0$ and $\sum_{l=1}^{n} A_{lij} X_{li_0} / \sum_{l=1}^{n} X_{li_0} \neq 0$, the value $\big| s_{iji_0} - \sum_{l=1}^{n} A_{lij} X_{li_0} / \sum_{l=1}^{n} X_{li_0} \big|$ represents the percentage of data elements in C*_{i_0} that do not consent with the majority of the other elements in the group w.r.t. the clustering solution provided by C_j. In other words, we are trying to minimize the dissent and maximize the consent simultaneously. The remaining constraints are relatively simple: (5) enforces the condition that a data element should belong to precisely one cluster in the final solution and that every cluster must have size at least 1; (6) ensures that s_{iji_0} is an appropriate A-string for every cluster center.
5 0-1 Semidefinite Program for Model 1
The formulation given by (4)-(6) is a mixed integer program (MIP, for short) with a nonlinear objective function in (4). Solving this model optimally, however, is extremely challenging: (a) the constraints in (5)-(6) are discrete; (b) the objective is nonlinear and nonconvex. One possible way of attacking the problem is to "relax" it to some polynomially solvable problem such as an SDP (the problem of minimizing a linear function over the intersection of a polyhedron and the cone of symmetric and positive semidefinite matrices, see [12] for an introduction). Our effort will be to convert the nonlinear form in (4) into a 0-1 SDP form. By introducing artificial variables, we rewrite (4) as
$$\min \;\; \sum_{i=1}^{k} \sum_{j=1}^{m} \sum_{i_0=1}^{k} t_{iji_0} \qquad (7)$$
$$s_{iji_0} - c_{iji_0} \le t_{iji_0}, \qquad c_{iji_0} - s_{iji_0} \le t_{iji_0} \qquad \forall i, i_0, j, \qquad (8)$$
where the term c_{iji_0} represents the second term in (4), defined by
$$c_{iji_0} = \frac{\sum_{l=1}^{n} A_{lij} X_{li_0}}{\sum_{l=1}^{n} X_{li_0}} \qquad \forall i, i_0, j. \qquad (9)$$
Since both A_{lij} and X_{li_0} are binary, (9) can be rewritten as
$$c_{iji_0} = \frac{\sum_{l=1}^{n} A_{lij}^2 X_{li_0}^2}{\sum_{l=1}^{n} X_{li_0}^2} \qquad \forall i, i_0, j. \qquad (10)$$
Let us introduce a vector variable $y_{i_0} \in \mathbb{R}^n$ whose lth element is defined by
$$y_{i_0}^{(l)} = \frac{X_{li_0}}{\sqrt{\sum_{l=1}^{n} X_{li_0}^2}} = \frac{X_{li_0}}{\|X_{i_0}\|_2}. \qquad (11)$$
Let $A_{ij} \in \mathbb{R}^n$ be a vector whose lth element has value $A_l(i, j)$. This allows us to represent (10) as
$$c_{iji_0} = \mathrm{tr}(B_{ij} Z_{i_0}), \qquad Z_{i_0}^2 = Z_{i_0}, \qquad Z_{i_0} \succeq 0, \qquad (12)$$
where $B_{ij} = \mathrm{diag}(A_{ij})$ is a diagonal matrix with $(B_{ij})_{ll} = A_l(i, j)$; the second and third properties follow from $Z_{i_0} = y_{i_0} y_{i_0}^T$ being a positive semidefinite matrix. Now, we rewrite the constraints for
X in terms of Z. (5) is automatically satisfied by the following constraints on the elements of Z_{i_0}:
$$\sum_{l=1}^{n} Z_{i_0}^{(ll)} = 1 \;\; \forall i_0 \in [1, k], \qquad \sum_{l'=1}^{n} Z_{i_0}^{(ll')} \le 1 \;\; \forall i_0 \in [1, k],\; \forall l \in [1, n], \qquad (13)$$
where $Z_{i_0}^{(uv)}$ refers to the (u, v) entry of matrix $Z_{i_0}$. Since $Z_{i_0}$ is a symmetric projection matrix by
construction, (7)-(13) constitute a precisely defined 0-1 SDP that can be expressed in trace form as
$$\min \;\; \sum_{i_0=1}^{k} \mathrm{tr}\big(\mathrm{diag}(T_{i_0} e_k)\big) \qquad (14)$$
$$\text{s.t.} \quad (Q_{i_0} - S_{i_0} - T_{i_0}) \le 0, \qquad (S_{i_0} - T_{i_0} - Q_{i_0}) \le 0 \qquad \forall i_0 \in [1, k], \qquad (15)$$
$$\Big(\sum_{i_0=1}^{k} Z_{i_0}\Big) e_n = e_n, \qquad \mathrm{tr}(Z_{i_0}) = 1 \;\; \forall i_0 \in [1, k], \qquad \mathrm{tr}\Big(\sum_{i_0=1}^{k} Z_{i_0}\Big) = k, \qquad (16)$$
$$S_{i_0} e_k = e_m \;\; \forall i_0 \in [1, k], \qquad Z \ge 0, \quad Z_{i_0}^2 = Z_{i_0}, \quad Z_{i_0} = Z_{i_0}^T, \quad S_{i_0} \in \{0, 1\}, \qquad (17)$$
where $Q_{i_0}(i, j) = c_{iji_0} = \mathrm{tr}(B_{ij} Z_{i_0})$, and $e_n \in \mathbb{R}^n$ is a vector of all 1s.
The experimental results for this model indicate that it performs very well in practice (see [10]).
However, because we must solve the model while maintaining the requirement that Si0 be binary
(otherwise, the problem becomes ill-posed), a branch and bound type method is needed. Such
approaches are widely used in many application areas, but its worst case complexity is exponential
in the input data size. In the subsequent sections, we will make several changes to this framework
based on additional observations in order to obtain a polynomial algorithm for the problem.
6 Integer Program and 0-1 Semidefinite Program for Model 2
Recall the definition of the variables c_{iji_0}, which can be interpreted as the size of the overlap between the cluster C*_{i_0} in the final solution and C_{ij}, normalized by the cardinality of C*_{i_0}. Let us define
$$c_{i^* j i_0} = \max_{i=1,\ldots,k} c_{iji_0}.$$
Let us also define vector variables $q_{ji_0}$ whose ith element is $s_{iji_0} - c_{iji_0}$. In the IP model 1, we try to minimize the sum of all the $L_1$-norms of $q_{ji_0}$. The main difficulty in the previous formulation stems from the fact that $c_{iji_0}$ is a fractional function w.r.t. the assignment matrix X. Fortunately, we note that since the entries $c_{iji_0}$ are fractional, satisfying $\sum_{i=1}^{k} c_{iji_0} = 1$ for any fixed $j, i_0$, their sum of squares is maximized when the largest entry is as high as possible. Thus, minimizing the function $1 - \sum_{i=1}^{k} (c_{iji_0})^2$ is a reasonable substitute for minimizing the sum of the $L_1$-norms in the IP model 1. The primary advantage of this observation is that we do not need to know the "index" $(i^*)$ of the maximal element $c_{i^* j i_0}$. As before, X denotes the assignment matrix. We no longer need the
variable s, as it can be easily determined from the solution. This yields the following IP.
$$\min \;\; \sum_{i_0=1}^{k} \sum_{j=1}^{m} \Big(\sum_{l=1}^{n} X_{li_0}\Big) \Big(1 - \sum_{i=1}^{k} (c_{iji_0})^2\Big) \qquad (18)$$
$$\text{s.t.} \quad \sum_{i_0=1}^{k} X_{li_0} = 1 \;\; \forall l \in [1, n], \qquad \sum_{l=1}^{n} X_{li_0} \ge 1 \;\; \forall i_0 \in [1, k], \qquad X_{li_0} \in \{0, 1\}. \qquad (19)$$
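For intuition, objective (18) is cheap to evaluate for any candidate hard assignment. A brute-force evaluator (our illustration, not the paper's code) follows:

```python
import numpy as np

def model2_objective(A, assign, k):
    """A: (n, m, k) A-strings; assign: (n,) final cluster ids in {0..k-1}.
    For each final cluster i0 and algorithm j, c[j, i] is the fraction of the
    cluster's members that algorithm j placed in its own cluster i."""
    obj = 0.0
    for i0 in range(k):
        members = np.where(assign == i0)[0]
        if len(members) == 0:
            continue
        c = A[members].mean(axis=0)                    # c[j, i] = c_{i j i0}
        obj += len(members) * (1.0 - (c ** 2).sum(axis=1)).sum()
    return obj
```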
We next discuss how to transform the above problem into a 0-1 SDP. For this, we first note that the objective function (18) can be expressed as
$$\min \;\; \sum_{i_0=1}^{k} \sum_{j=1}^{m} \left( \sum_{l=1}^{n} X_{li_0} - \sum_{i=1}^{k} \frac{\big(\sum_{l=1}^{n} A_{lij} X_{li_0}\big)^2}{\sum_{l=1}^{n} X_{li_0}} \right), \qquad (20)$$
which can be equivalently stated as
$$\min \;\; \left( nm - \sum_{i_0=1}^{k} \sum_{j=1}^{m} \sum_{i=1}^{k} \frac{\big(\sum_{l=1}^{n} A_{lij} X_{li_0}\big)^2}{\sum_{l=1}^{n} X_{li_0}} \right). \qquad (21)$$
The numerator of the second term above can be rewritten as
$$\Big(\sum_{l=1}^{n} A_{lij} X_{li_0}\Big)^2 = (A_{1ij} X_{1i_0} + \ldots + A_{nij} X_{ni_0})^2 = (A_{ij}^T X_{i_0})^2 = X_{i_0}^T A_{ij} A_{ij}^T X_{i_0}, \qquad (22)$$
where $X_{i_0}$ is the $i_0$th column vector of X. Therefore, the second term of (21) can be written as
$$\sum_{i_0=1}^{k} \sum_{j=1}^{m} \sum_{i=1}^{k} X_{i_0}^T A_{ij} A_{ij}^T X_{i_0} (X_{i_0}^T X_{i_0})^{-1} = \mathrm{tr}\Big(\sum_{i_0=1}^{k} \sum_{j=1}^{m} \sum_{i=1}^{k} A_{ij} A_{ij}^T Z_{i_0}\Big) = \mathrm{tr}\Big(\sum_{j=1}^{m} \sum_{i=1}^{k} A_{ij} A_{ij}^T Z\Big) = \mathrm{tr}\Big(\sum_{j=1}^{m} B_j Z\Big) = \mathrm{tr}(BZ). \qquad (23)$$
In (23), $Z_{i_0} = X_{i_0}(X_{i_0}^T X_{i_0})^{-1} X_{i_0}^T$ (the same as in IP model 1), $Z = \sum_{i_0=1}^{k} Z_{i_0}$, and $B = \sum_{j=1}^{m} B_j$ with $B_j = \sum_{i=1}^{k} A_{ij} A_{ij}^T$. Since each matrix $Z_{i_0}$ is a symmetric projection matrix and $X_{i_{0_1}}$ and $X_{i_{0_2}}$ are orthogonal to each other when $i_{0_1} \neq i_{0_2}$, Z is a projection matrix of the form $X(X^T X)^{-1} X^T$. The last fact, also used in [13], is originally attributed to an anonymous referee in [14]. Finally, we derive the 0-1 SDP formulation for the problem (18)-(19) as follows:
$$\min \;\; (nm - \mathrm{tr}(BZ)) \qquad (24)$$
$$\text{s.t.} \quad Z e_n = e_n, \qquad \mathrm{tr}(Z) = k, \qquad (25)$$
$$Z \ge 0, \quad Z^2 = Z, \quad Z = Z^T. \qquad (26)$$
Relaxing and Solving the 0-1 SDP: The relaxation of (24)-(26) exploits the fact that Z is a projection matrix satisfying Z^2 = Z. This allows replacing the last three constraints in (26) by $I \succeq Z \succeq 0$. By establishing the result that any feasible solution to the second formulation of the 0-1 SDP, Z_feas, is a rank-k matrix, we first solve the relaxed SDP using SeDuMi [15], take the rank-k projection of Z*, and then adopt a rounding based on a variant of the winner-takes-all approach to obtain a solution in polynomial time. For the technical details and their proofs, please refer to [10].
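For readers who want to experiment, here is a minimal sketch of the relaxed problem in CVXPY. The function name, the choice of solver, and the simple eigenvector-based rounding are our own simplifications; the actual procedure and its rank-k guarantee are given in [10].

```python
import numpy as np
import cvxpy as cp

def relaxed_sdp_clustering(B, k):
    """B = sum_j B_j = sum_{i,j} A_ij A_ij^T, an (n, n) PSD matrix; k clusters.
    Solves min nm - tr(BZ), with Z^2 = Z relaxed to 0 <= Z <= I (Loewner order)."""
    n = B.shape[0]
    Z = cp.Variable((n, n), symmetric=True)
    e = np.ones(n)
    constraints = [Z >> 0, np.eye(n) - Z >> 0,   # relaxation of Z^2 = Z
                   Z @ e == e, cp.trace(Z) == k, Z >= 0]
    # minimizing nm - tr(BZ) is equivalent to maximizing tr(BZ);
    # an SDP-capable solver (e.g. SCS) is required
    cp.Problem(cp.Maximize(cp.trace(B @ Z)), constraints).solve()
    # rank-k projection of Z*, followed by a winner-takes-all style rounding
    w, V = np.linalg.eigh(Z.value)
    factors = V[:, -k:] * np.sqrt(np.maximum(w[-k:], 0.0))
    return factors.argmax(axis=1)                # crude cluster assignment
```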
7 Experimental Results
Our experiments included evaluations on several classification datasets, segmentation databases and
simulations. Due to space limitations, we provide a brief summary here. Our first set of experiments illustrates an application to several datasets from the UCI Machine Learning Repository:
(1) Iris dataset, (2) Soybean dataset and (3) Wine dataset; these include ground truth data, see
http://www.ics.uci.edu/~mlearn/MLRepository.html. To create the ensemble, we used a set of [4, 10]
clustering schemes (by varying the clustering criterion and/or algorithm) from the CLUTO clustering
toolkit. The multiple solutions comprised the input ensemble, our model was then used to determine
a agreement maximizing solution. The ground-truth data was used at this stage to evaluate accuracy
of the ensemble (and individual schemes). The results are shown in Figure 1(a)-(c). For each case,
we can see that the ensemble clustering solution is at least as good as the best clustering algorithm.
Observe, however, that while such results are expected for this and many other datasets (see [3]),
the consensus solution may not always be superior to the ?best? clustering solution. For instance,
in Fig. 1(c) (for m = 7) the best solution has a marginally lower error rate than the ensemble. An
ensemble solution is useful because we do not know a priori that which algorithm will perform the
best (especially if ground truth is unavailable).
[Figure 1 appears here: three panels (a)-(c), each plotting "Mislabelled cases in each algorithm" (y-axis, roughly 0.1-0.6) against "Number of clustering algorithms in ensemble (m)" (x-axis, 3-11), comparing the cluster ensemble against the individual algorithms.]
Figure 1: Synergy. The fraction of mislabeled cases ([0, 1]) in a consensus solution is compared to the number of mislabelled cases in individual clustering algorithms. We illustrate the ensemble effect for the Iris dataset in (a), the Soybean dataset in (b), and the Wine dataset in (c).
Our second set of experiments focuses on a novel application of ensembles to the problem of image
segmentation. Even sophisticated segmentation algorithms may yield "different" results on the same image; when multiple segmentations are available, it seems reasonable to "combine" the segmentations to reduce degeneracies. Our experimental results indicate that in many cases, we can obtain a better
overall segmentation that captures (more) details in the images more accurately with fewer outlying
clusters. In Fig. 2, we illustrate the results on an image from the Berkeley dataset. The segmentations were generated using several powerful algorithms including (a) Normalized Cuts, (b) Energy
Minimization by Graph Cuts and (c)?(d) Curve Evolution. Notice that each algorithm performs well
but misses out on some details. For instance, (a) and (d) do not segment the eyes; (b) does well in
segmenting the shirt collar region but can only recognize one of the eyes and (c) creates an additional
cut across the forehead. The ensemble (extreme right) is able to segment these details (eyes, shirt
collar and cap) nicely by combining (a)?(d). For implementation details of the algorithm including
settings, preprocessing and additional evaluations, please refer to [10].
[Figure 2 appears here: five panels labeled (a), (b), (c), (d), and "ensemble".]
Figure 2: A segmentation ensemble on an image from the Berkeley Segmentation dataset. (a)?(d)
show the individual segmentations overlaid on the input image, the right-most image shows the
segmentation generated from ensemble clustering.
The final set of our experiments was performed on 500 runs of artificially generated cluster ensembles. We first constructed an initial segmentation; this was then repeatedly permuted (up to 15%), yielding a set of clustering solutions. The solutions from our model and [3] were compared w.r.t.
our objective functions and Normalized Mutual Information used in [3]. In Figure 3(a), we see that
our algorithm (Model 1) outperforms [3] on all instances. In the average case, the ratio is slightly
more than 1.5. We must note the time-quality trade-off because solving Model 1 requires a branch-and-bound approach. In Fig. 3(b), we compare the results of [3] with solutions from the relaxed
SDP Model 2 on (24). We can see that our model performs better in ≈ 95% of the cases. Finally, Figure 3(c) shows a comparison of relaxed SDP Model 2 with [3] on the objective function optimized in [3]
(using the better of the two techniques). We observed that our solutions achieve superior results in 80%
of the cases. The results show that even empirically our objective functions model similarity rather
well, and that Normalized Mutual Information may be implicitly optimized within this framework.
Remarks. We note that the graph partitioning methods used in [3] typically run much faster than SDP solvers (e.g., SeDuMi [15] and SDPT3) on comparable problem sizes. However, given the increasing interest in SDP in the last few years, we may expect the development of
new algorithms, and faster/more efficient software tools.
8 Conclusions
We have proposed a new algorithm for ensemble clustering based on a SDP formulation. Among
the important contributions of this paper is, we believe, the observation that the notion of agreement
in an ensemble can be captured better using string encoding rather than a voting strategy. While
a partition problem defined directly on such strings yields a non-linear optimization problem, we
illustrate a transformation into a strict 0-1 SDP via novel convexification techniques. The last result
of this paper is the design of a modified model of the SDP based on additional observations on the
structure of the underlying matrices. We discuss extensive experimental evaluations on simulations
and real datasets, in addition, we illustrate application of the algorithm for segmentation ensembles.
We feel that the latter application is of independent research interest; to the best of our knowledge,
this is the first algorithmic treatment of generating segmentation ensembles for improving accuracy.
[Figure 3 appears here: three histograms (a)-(c) of "Number of instances"; panels (a) and (b) plot "Ratios of solutions of objective functions of the two algorithms" (ranges roughly 1-5 and 0.4-1.8), panel (c) plots "Normalized difference" (-0.1 to 0.1); regions are marked "Worse" and "Better".]
Figure 3: A comparison of [3] with SDP Model 1 in (a), and with SDP Model 2 on (24) in (b).
The solution from [3] was used as the numerator. In (c), comparisons (difference in normalized
values) between our solution and the best among IBGF and CBGF based on the Normalized Mutual
Information (NMI) objective function used in [3].
Acknowledgments. This work was supported in part by NSF grants CCF-0546509 and IIS0713489. The first author was also supported by start-up funds from the Dept. of Biostatistics
and Medical Informatics, UW-Madison. We thank D. Sivakumar for useful discussions, Johan Löfberg for a thorough explanation of the salient features of Yalmip [16], and the reviewers for suggestions regarding the presentation of the paper. One of the reviewers also pointed out a typo in the derivations in Section 6.
References
[1] V. Filkov and S. Skiena. Integrating microarray data by consensus clustering. In Proc. of International Conference on Tools with Artificial Intelligence, page 418, 2003.
[2] X. Z. Fern and C. E. Brodley. Solving cluster ensemble problems by bipartite graph partitioning. In Proc. of International Conference on Machine Learning, page 36, 2004.
[3] A. Strehl and J. Ghosh. Cluster Ensembles - A Knowledge Reuse Framework for Combining Partitionings. In Proc. of AAAI 2002, pages 93-98, 2002.
[4] N. Bansal, A. Blum, and S. Chawla. Correlation clustering. In Proc. of Symposium on Foundations of Computer Science, page 238, 2002.
[5] S. Monti, P. Tamayo, J. Mesirov, and T. Golub. Consensus clustering: A resampling-based method for class discovery and visualization of gene expression microarray data. Mach. Learn., 52(1-2):91-118, 2003.
[6] A. Gionis, H. Mannila, and P. Tsaparas. Clustering aggregation. In Proc. of International Conference on Data Engineering, pages 341-352, 2005.
[7] N. Ailon, M. Charikar, and A. Newman. Aggregating inconsistent information: ranking and clustering. In Proc. of Symposium on Theory of Computing, pages 684-693, 2005.
[8] M. Charikar, V. Guruswami, and A. Wirth. Clustering with qualitative information. J. Comput. Syst. Sci., 71(3):360-383, 2005.
[9] X. Z. Fern and C. E. Brodley. Random projection for high dimensional data clustering: A cluster ensemble approach. In Proceedings of International Conference on Machine Learning, 2003.
[10] V. Singh. On Several Geometric Optimization Problems in Biomedical Computation. PhD thesis, State University of New York at Buffalo, 2007.
[11] L. Gasieniec, J. Jansson, and A. Lingas. Approximation algorithms for Hamming clustering problems. In Proc. of Symposium on Combinatorial Pattern Matching, pages 108-118, 2000.
[12] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, New York, 2004.
[13] J. Peng and Y. Wei. Approximating k-means-type clustering via semidefinite programming. SIAM Journal on Optimization, 18(1):186-205, 2007.
[14] A. D. Gordon and J. T. Henderson. An algorithm for Euclidean sum of squares classification. Biometrics, 33:355-362, 1977.
[15] J. F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11-12:625-653, 1999.
[16] J. Löfberg. YALMIP: A toolbox for modeling and optimization in MATLAB. In CCA/ISIC/CACSD, September 2004.
2,518 | 3,284 | Gaussian Process Models for
Link Analysis and Transfer Learning
Kai Yu
NEC Laboratories America
Cupertino, CA 95014
Wei Chu
Columbia University, CCLS
New York, NY 10115
Abstract
This paper aims to model relational data on edges of networks. We describe appropriate Gaussian Processes (GPs) for directed, undirected, and bipartite networks.
The inter-dependencies of edges can be effectively modeled by adapting the GP
hyper-parameters. The framework suggests an intimate connection between link
prediction and transfer learning, which were traditionally two separate research
topics. We develop an efficient learning algorithm that can handle a large number
of observations. The experimental results on several real-world data sets verify
superior learning capacity.
1 Introduction
In many scenarios the data of interest consist of relational observations on the edges of networks.
Typically, a given finite collection of such relational data can be represented as an M × N matrix Y = {y_{i,j}}, which is often partially observed because many elements are missing. Sometimes accompanying Y are attributes of nodes or edges. As an important characteristic of networks, the {y_{i,j}} are highly inter-dependent even conditioned on known node or edge attributes. The phenomenon is
extremely common in real-world data, for example,
- Bipartite Graphs. The data represent relations between two different sets of objects or
measurements under a pair of heterogeneous conditions. One notable example is transfer
learning, also known as multi-task learning, which jointly learns multiple related but different predictive functions based on the M ? N observed labels Y, namely, the results of
N functions acting on a set of M data examples. Collaborative filtering is an important
application of transfer learning that learns many users? interests on a large set of items.
- Undirected and Directed Graphs. The data are measurements of the existence, strength, and type of links between a set of nodes in a graph, where a given collection of observations is an M × M (in this case N = M) matrix Y, which can be symmetric or asymmetric, depending on whether the links are undirected or directed. Examples include protein-protein
interactions, social networks, citation networks, and hyperlinks on the WEB. Link prediction aims to recover those missing measurements in Y, for example, predicting unknown
protein-protein interactions based on known interactions.
The goal of this paper is to design a Gaussian process (GP) [13] framework to model the dependence structure of networks, and to contribute an efficient algorithm to learn and predict large-scale
relational data. We explicitly construct a series of parametric models indexed by their dimensionality, and show that in the limit we obtain nonparametric GP priors consistent with the dependence
of edge-wise measurements. Since the kernel matrix is over a quadratic number of edges and the computational cost is even cubic in the kernel size, we develop an efficient algorithm to reduce the
computational complexity. We also demonstrate that transfer learning has an intimate connection to
link prediction. Our method generalizes several recent transfer learning algorithms by additionally
learning a task-specific kernel that directly expresses the dependence between tasks.
1
The application of GPs to learning on networks or graphs has been fairly recent. Most of the work
in this direction has focused on GPs over nodes of graphs and targeted at the classification of nodes
[20, 6, 10]. In this paper, we regard the edges as the first-class citizen and develop a general GP
framework for modeling the dependence of edge-wise observations on bipartite, undirected and
directed graphs. This work extends [19], which built GPs for only bipartite graphs and proposed
an algorithm scaling cubically to the number of nodes. In contrast, the work here is more general
and the algorithm scales linearly to the number of edges. Our study promises a careful treatment to
model the nature of edge-wise observations and offers a promising tool for link prediction.
2 Gaussian Processes for Network Data
2.1 Modeling Bipartite Graphs
We first review the edge-wise GP for bipartite graphs [19], where each observation is a measurement
on a pair of objects of different types, or under a pair of heterogeneous conditions. Formally, let U and V be two index sets; then y_{i,j} denotes a measurement on edge (i, j) with i ∈ U and j ∈ V. In the context of transfer learning, the pair involves a data instance i and a task j, and y_{i,j} denotes the label of data i within task j. The probabilistic model assumes that the y_{i,j} are noisy outcomes of a real-valued function f : U × V → R, which follows a Gaussian process GP(b, K), characterized by a mean function b and a covariance (kernel) function between edges
$$K((i, j), (i', j')) = \Sigma(i, i')\,\Omega(j, j'), \qquad (1)$$
where Σ and Ω are kernel functions on U and V, respectively. As a result, the realizations of f on a finite set i = 1, ..., M and j = 1, ..., N form a matrix F, following a matrix-variate normal distribution N_{M×N}(B, Σ, Ω), or equivalently a normal distribution N(b, K) with mean b = vec(B) and covariance K = Ω ⊗ Σ, where ⊗ denotes the Kronecker product. The dependence structure of edges is decomposed into the dependence of nodes. Since a kernel is a notion of similarity, the model expresses a prior belief: if node i is similar to node i' and node j is similar to node j', then so are f(i, j) and f(i', j').
It is essential to learn the kernels Σ and Ω based on the partially observed Y, in order to capture the dependence structure of the network. For transfer learning, this means learning the kernel Σ between data instances and the kernel Ω between tasks. Having Σ and Ω, it is then possible to predict the missing y_{i,j} based on known observations by using GP inference.
Theorem 2.1 ([19]). Let $f(i, j) = D^{-1/2} \sum_{k=1}^{D} g_k(i)\, h_k(j) + b(i, j)$, where $g_k \overset{iid}{\sim} GP(0, \Sigma)$ and $h_k \overset{iid}{\sim} GP(0, \Omega)$; then $f \sim GP(b, K)$ in the limit $D \to \infty$, and the covariance between pairs is $K((i, j), (i', j')) = \Sigma(i, i')\,\Omega(j, j')$.
Theorem (2.1) offers an alternative view to understand the model. The edge-wise function f can be decomposed into a product of two sets of intermediate node-wise functions, $\{g_k\}_{k=1}^{\infty}$ and $\{h_k\}_{k=1}^{\infty}$, which are i.i.d. samples from two GP priors GP(0, Σ) and GP(0, Ω). The theorem suggests that the GP model for bipartite relational data is a generalization of a Bayesian low-rank matrix factorization $F = HG^T + B$, under the priors $H \sim N_{M \times D}(0, \Sigma, I)$ and $G \sim N_{N \times D}(0, \Omega, I)$. When D is finite, the elements of F are not Gaussian random variables.
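A small numpy check (our own, using linear node kernels as a stand-in) makes the Kronecker structure behind Eq. (1) concrete; with column-major stacking f = vec(F), the edge (i, j) sits at index j·M + i:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 4, 3, 2
X = rng.standard_normal((M, d)); Sigma = X @ X.T   # kernel on U
Z = rng.standard_normal((N, d)); Omega = Z @ Z.T   # kernel on V
# covariance of f = vec(F) for F ~ N_{MxN}(0, Sigma, Omega):
K = np.kron(Omega, Sigma)
# spot-check Eq. (1) for the pair of edges (i, j) and (i2, j2):
i, j, i2, j2 = 1, 2, 3, 0
assert np.isclose(K[j * M + i, j2 * M + i2], Sigma[i, i2] * Omega[j, j2])
```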
2.2 Modeling Directed and Undirected Graphs
In this section we model observations on pairs of nodes of the same set U. This case includes both
directed and undirected graphs. It turns out that the directed graph is relatively easy to handle while
deriving a GP prior for undirected graphs is slightly non-trivial. For the case of directed graphs, we
let the function f : U × U → R follow GP(b, K), where the covariance function between edges is
$$K((i, j), (i', j')) = C(i, i')\,C(j, j'), \qquad (2)$$
and C : U × U → R is a kernel function between nodes. Since a random function f drawn from the GP is generally asymmetric (even if b is symmetric), namely f(i, j) ≠ f(j, i), the direction of edges can be modeled. The covariance function Eq. (2) can be derived from Theorem (2.1) by setting {g_k} and {h_k} to be two independent sets of functions i.i.d. sampled from the same GP prior
2
GP(0, C), modeling the situation that each node?s behavior as a sender is different but statistically
related to it?s behavior as a receiver. This is a reasonable modeling assumption. For example, if two
papers cite a common set of papers, their are also likely to be cited by a common set of other papers.
For the case of undirected graphs, we need to design a GP that ensures any sampled function to
be symmetric. Following the construction of GP in Theorem (2.1), it seems that f is symmetric if g_k ≡ h_k for k = 1, ..., D. However, a calculation reveals that f is not bounded in the limit D → ∞. Theorem (2.2) shows that the problem can be solved by subtracting a growing quantity $D^{1/2} C(i, j)$ as D → ∞, and suggests the covariance function
$$K((i, j), (i', j')) = C(i, i')\,C(j, j') + C(i, j')\,C(j, i'). \qquad (3)$$
With such a covariance function, f is ensured to be symmetric because the covariance between f(i, j) and f(j, i) equals the variance of either.
Theorem 2.2. Let $f(i, j) = D^{-1/2} \sum_{k=1}^{D} t_k(i)\, t_k(j) + b(i, j) - D^{1/2} C(i, j)$, where $t_k \overset{iid}{\sim} GP(0, C)$; then $f \sim GP(b, K)$ in the limit $D \to \infty$, and the covariance between pairs is $K((i, j), (i', j')) = C(i, i')C(j, j') + C(i, j')C(j, i')$. If b(i, j) = b(j, i), then f(i, j) = f(j, i).
Proof. Without loss of generality, let b(i, j) ≡ 0. Based on the central limit theorem, for every (i, j), f(i, j) converges to a zero-mean Gaussian random variable as D → ∞, because $\{t_k(i) t_k(j)\}_{k=1}^{D}$ is a collection of random variables independently following the same distribution, with mean C(i, j). The covariance function is
$$\mathrm{Cov}(f(i, j), f(i', j')) = \frac{1}{D} \sum_{k=1}^{D} \big\{ E[t_k(i) t_k(j) t_k(i') t_k(j')] - C(i, j) E[t_k(i') t_k(j')] - C(i', j') E[t_k(i) t_k(j)] + C(i, j) C(i', j') \big\}$$
$$= C(i, i')C(j, j') + C(i, j')C(j, i') + C(i, j)C(i', j') - C(i, j)C(i', j') = C(i, i')C(j, j') + C(i, j')C(j, i').$$
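The construction in Theorem (2.2) is easy to verify numerically. The following Monte Carlo sketch is our own sanity check with an arbitrary positive definite C; the empirical covariance should match Eq. (3) up to sampling error:

```python
import numpy as np

rng = np.random.default_rng(1)
n, D, S = 3, 200, 20000                    # nodes, factors, Monte Carlo samples
A = rng.standard_normal((n, n)); C = A @ A.T + n * np.eye(n)
L = np.linalg.cholesky(C)
F = np.zeros((S, n, n))
for s in range(S):
    T = L @ rng.standard_normal((n, D))    # columns t_k ~ N(0, C)
    F[s] = (T @ T.T) / np.sqrt(D) - np.sqrt(D) * C   # f with b = 0
i, j, i2, j2 = 0, 1, 2, 1
emp = np.cov(F[:, i, j], F[:, i2, j2])[0, 1]
theory = C[i, i2] * C[j, j2] + C[i, j2] * C[j, i2]
# emp and theory agree up to Monte Carlo error; each F[s] is symmetric
```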
Interestingly, Theorem (2.2) recovers Theorem (2.1) and is thus more general. To see the connection,
let $h_k \sim GP(0, \Sigma)$ and $g_k \sim GP(0, \Omega)$ be concatenated to form a function $t_k$; then we have $t_k \sim GP(0, C)$ and the covariance is
$$C(i, j) = \begin{cases} \Sigma(i, j), & \text{if } i, j \in U, \\ \Omega(i, j), & \text{if } i, j \in V, \\ 0, & \text{if } i, j \text{ are in different sets.} \end{cases} \qquad (4)$$
For $i, i' \in U$ and $j, j' \in V$, applying Theorem (2.2) leads to
$$f(i, j) = D^{-1/2} \sum_{k=1}^{D} t_k(i)\, t_k(j) + b(i, j) - D^{1/2} C(i, j) = D^{-1/2} \sum_{k=1}^{D} h_k(i)\, g_k(j) + b(i, j), \qquad (5)$$
$$K((i, j), (i', j')) = C(i, i')C(j, j') + C(i, j')C(j, i') = \Sigma(i, i')\,\Omega(j, j'). \qquad (6)$$
Theorems (2.1) and (2.2) suggest a general GP framework to model directed or undirected relationships connecting heterogeneous types of nodes. Basically, we learn node-wise covariance functions,
like Σ, Ω, and C, such that the edge-wise covariances composed by Eq. (1), (2), or (3) can explain the observations y_{i,j} on edges. The proposed framework can be extended to cope with
more complex network data, for example, networks containing both undirected links and directed
links. We will briefly discuss some extensions in Sec. 6.
3 An Efficient Learning Algorithm
We consider the regression case under a Gaussian noise model, and later briefly discuss extensions
to the classification case. Let $y = [y_{i,j}]_{(i,j) \in O}$ be the observational vector of length |O|, f be the corresponding quantities of the latent function f, and K be the |O| × |O| matrix of K between edges having observations, computed by Eq. (1)-(3). Then observations on edges are generated by
$$y_{i,j} = f(i, j) + b_{i,j} + \epsilon_{i,j}, \qquad (7)$$
where $f \sim N(0, K)$, $\epsilon_{i,j} \overset{iid}{\sim} N(0, \beta^{-1})$, and the mean has a parametric form $b_{i,j} = \mu_i + \nu_j$. In the directed/undirected graph case we let $\nu_i = \mu_i$ for any $i \in U$. f can be analytically marginalized out; the marginal distribution of observations is then
$$p(y|\theta) = N(y;\, b,\, K + \beta^{-1} I), \qquad (8)$$
where θ = {β, b, K}. The parameters can be estimated by minimizing the penalized negative log-likelihood $L(\theta) = -\ln p(y|\theta) + \ell(\theta)$ under a suitable regularization ℓ(θ). The objective function has the form
$$L(\theta) = \frac{|O|}{2} \log 2\pi + \frac{1}{2} \ln |C| + \frac{1}{2} \mathrm{tr}\big(C^{-1} m m^T\big) + \ell(\theta), \qquad (9)$$
where $C = K + \beta^{-1} I$, $m = y - b$, and $b = [b_{i,j}]$, $(i, j) \in O$. ℓ(θ) will be configured in Sec. 3.1. Gradient-based optimization packages can be applied to find a local optimum of θ. However, the computation can be prohibitively expensive when the number |O| of measured edges is very big, because the memory cost is $O(|O|^2)$ and the computational cost is $O(|O|^3)$. In our experiments |O| is about tens of thousands or even millions. A slightly improved algorithm was introduced in [19], with a complexity $O(M^3 + N^3)$ cubic in the number of nodes. That algorithm employed a non-Gaussian approximation based on Theorem (2.1) and is applicable only to bipartite graphs.
We reduce the memory and computational cost by exploring the special structure of K as discussed
in Sec. 2 and assume K to be composed of node-wise linear kernels $\Sigma(i, i') = \langle x_i, x_{i'} \rangle$, $\Omega(j, j') = \langle z_j, z_{j'} \rangle$, and $C(i, j) = \langle x_i, x_j \rangle$, with $x \in \mathbb{R}^{L_1}$ and $z \in \mathbb{R}^{L_2}$. The edge-wise covariance is then
- Bipartite Graphs: $K((i, j), (i', j')) = \langle x_i \otimes z_j,\; x_{i'} \otimes z_{j'} \rangle$.
- Directed Graphs: $K((i, j), (i', j')) = \langle x_i \otimes x_j,\; x_{i'} \otimes x_{j'} \rangle$.
- Undirected Graphs: $K((i, j), (i', j')) = \langle x_i \otimes x_j,\; x_{i'} \otimes x_{j'} \rangle + \langle x_i \otimes x_j,\; x_{j'} \otimes x_{i'} \rangle$.
We turn the problem of optimizing K into the problem of optimizing $X = [x_1, \ldots, x_M]^T$ and $Z = [z_1, \ldots, z_N]^T$. It is important to note that in all the cases the kernel matrix has the form $K = UU^T$, where U is an $|O| \times L$ matrix with $L \ll |O|$; therefore applying the Woodbury identity $C^{-1} = \beta\,[\,I - U(U^T U + \beta^{-1} I)^{-1} U^T\,]$ can dramatically reduce the computational cost. For example, in the bipartite graph case and the directed graph case, respectively, there are
$$U^T = \big[\, x_i \otimes z_j \,\big]_{(i,j) \in O}, \qquad \text{and} \qquad U^T = \big[\, x_i \otimes x_j \,\big]_{(i,j) \in O}, \qquad (10)$$
where the rows of U are indexed by (i, j) ∈ O. For the undirected graph case, we first rewrite the kernel function
$$K((i, j), (i', j')) = \langle x_i \otimes x_j, x_{i'} \otimes x_{j'} \rangle + \langle x_i \otimes x_j, x_{j'} \otimes x_{i'} \rangle$$
$$= \frac{1}{2}\big[ \langle x_i \otimes x_j, x_{i'} \otimes x_{j'} \rangle + \langle x_j \otimes x_i, x_{j'} \otimes x_{i'} \rangle + \langle x_i \otimes x_j, x_{j'} \otimes x_{i'} \rangle + \langle x_j \otimes x_i, x_{i'} \otimes x_{j'} \rangle \big]$$
$$= \frac{1}{2} \big\langle (x_i \otimes x_j + x_j \otimes x_i),\; (x_{i'} \otimes x_{j'} + x_{j'} \otimes x_{i'}) \big\rangle, \qquad (11)$$
and then obtain a simple form for the undirected graph case:
$$U^T = \frac{1}{\sqrt{2}} \big[\, x_i \otimes x_j + x_j \otimes x_i \,\big]_{(i,j) \in O}. \qquad (12)$$
The overall computational cost is $O(L^3 + |O| L^2)$. Empirically we found that the algorithm efficiently handles L = 500 when |O| is in the millions. The gradients with respect to U can be found in [12]. Further calculation of the gradients with respect to X and Z can be easily derived; here we omit the details to save space. Finally, in order to predict the missing measurements, we only need to estimate a simple linear model $f(i, j) = w^T u_{i,j} + b_{i,j}$.
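To make the complexity claim concrete, here is a sketch (ours, not the authors' code) of the marginal likelihood evaluation using the Woodbury identity together with the matching determinant identity; build_u_bipartite shows how U is assembled per Eq. (10):

```python
import numpy as np

def build_u_bipartite(X, Zf, obs):
    """Rows of U are x_i (kron) z_j for each observed edge (i, j) in obs."""
    return np.stack([np.kron(X[i], Zf[j]) for (i, j) in obs])

def neg_log_likelihood(U, m, beta):
    """Eq. (9) without the regularizer, for K = U U^T and m = y - b.
    Costs O(L^3 + |O| L^2) instead of O(|O|^3)."""
    O, L = U.shape
    G = U.T @ U + (1.0 / beta) * np.eye(L)                 # (L, L)
    Cinv_m = beta * (m - U @ np.linalg.solve(G, U.T @ m))  # Woodbury
    # |C| = |G| * beta^{-(|O| - L)} by the matrix determinant lemma
    logdet_C = np.linalg.slogdet(G)[1] - (O - L) * np.log(beta)
    return 0.5 * (O * np.log(2 * np.pi) + logdet_C + m @ Cinv_m)
```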
3.1 Incorporating Additional Attributes and Learning from Discrete Observations
There are different ways to incorporate node or edge attributes into our model. A common practice
is to let the kernel K, Σ, or Ω be some parametric function of the attributes. One such choice is the RBF function. However, node or edge attributes are typically local information while the network itself is rather a global dependence structure; thus the network data often has a large part of patterns that are independent of those known predictors. In the following, via the example of placing a Bayesian prior on Σ : U × U → R, we describe a flexible solution to incorporate additional knowledge. Let Σ_0 be the covariance that we wish Σ to be a priori close to. We apply the prior $p(\Sigma) = \frac{1}{Z} \exp(-\gamma E(\Sigma))$ and use its negative log-likelihood as a regularization for Σ:
$$\ell(\Sigma) = \gamma E(\Sigma) = \frac{\gamma}{2} \left[ \log |\Sigma + \lambda^{-1} I| + \mathrm{tr}\big( (\Sigma + \lambda^{-1} I)^{-1} \Sigma_0 \big) \right], \qquad (13)$$
where γ is a hyperparameter predetermined on validation data, and $\lambda^{-1}$ is a small number to be optimized. The energy function E(Σ) is related to the KL divergence $D_{KL}\big(GP(0, \Sigma_0) \,\|\, GP(0, \Sigma + \lambda^{-1}\delta)\big)$, where δ(·, ·) is the Dirac kernel. If we let Σ_0 be the linear kernel of the attributes, normalized by the dimensionality, then E(Σ) can be derived from a likelihood of Σ as if each dimension of the attributes were a random sample from GP(0, Σ + λ^{-1}δ). If the attributes are nonlinear predictors we can conveniently set Σ_0 by a nonlinear kernel. We set Σ_0 = I if the corresponding attributes are absent. ℓ(Ω), ℓ(C), and ℓ(K) can be set in the same way.
as the likelihood function for binary labels, which relates f (i, j) to the target yi,j ? {?1, +1}, by
a cumulative normal ? (yi,j (f (i, j) + bi,j )). To preserve computationally tractability, a family of
inference techniques, e.g. Laplace approximation, can be applied to finding a Gaussian distribution
that approximates the true likelihood. Then, the marginal likelihood (8) can be written as an explicit
expression and the gradient can be derived analytically as well.
4 Discussions on Related Work
Transfer Learning: As we have suggested before, link prediction for bipartite graphs has a tight connection to transfer learning. To make it clear, let f_j(·) = f(·, j); then the edge-wise function f : U × V → R consists of N node-wise functions f_j : U → R for j = 1, ..., N. If we fix Ω(j, j') ≡ δ(j, j'), namely a Dirac delta function, then the f_j are assumed to be i.i.d. GP functions from GP(0, Σ). This is the hierarchical Bayesian model that assumes multiple tasks sharing the same GP prior [18]. In particular, the negative logarithm of $p(\{y_{i,j}\}, \{f_j\} \,|\, \Sigma)$ is
$$L(\{f_j\}, \Sigma) = \sum_{j=1}^{N} \Big[ \sum_{i \in O_j} l\big(y_{i,j}, f_j(i)\big) + \frac{1}{2} f_j^T \Sigma^{-1} f_j \Big] + \frac{N}{2} \log |\Sigma|, \qquad (14)$$
where $l(y_{i,j}, f_j(i)) = -\log p(y_{i,j} | f_j(i))$. The form is close to the recent convex multi-task learning in a regularization framework [3], if the log-determinant term is replaced by a trace regularization term proportional to tr(Σ). It was proven in [3] that if l(·, ·) is convex in f_j, then the minimization of (14) is jointly convex in {f_j} and Σ. The GP approach differs from the regularization approach in two aspects: (1) the f_j are treated as random variables which are marginalized out, thus we only need to estimate Σ; (2) the regularization for Σ is a non-convex log-determinant term. Interestingly, because $\log |\Sigma| \le \mathrm{tr}(\Sigma) - M$, the trace norm is the convex envelope for the log-determinant, and thus the two minimization problems are somehow doing similar things. However, the framework introduced in this paper goes beyond the two methods by introducing an informative kernel Ω between tasks. From a probabilistic modeling point of view, the independence of {f_j} conditioned on Σ is a restrictive assumption and even incorrect when some task-specific attributes are given (which means that the {f_j} are not exchangeable anymore). The task-specific kernel for transfer learning was recently introduced in [4], which however increased the computational complexity by a factor of N^2. One contribution of this paper on transfer learning is an algorithm that can efficiently solve the learning problem with both a data kernel Σ and a task kernel Ω.
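To see objective (14) concretely, here is a short evaluation with a squared loss; the shapes and names are our own illustrative choices:

```python
import numpy as np

def multitask_neg_log_posterior(tasks, F, Sigma):
    """tasks: list over j of (idx_j, y_j), the observed example indices and
    labels of task j; F: (M, N) latent function values; Sigma: (M, M) kernel."""
    N = F.shape[1]
    Sigma_inv = np.linalg.inv(Sigma)
    val = 0.5 * N * np.linalg.slogdet(Sigma)[1]        # (N/2) log|Sigma|
    for j, (idx, y) in enumerate(tasks):
        val += 0.5 * np.sum((y - F[idx, j]) ** 2)      # l(y_ij, f_j(i))
        val += 0.5 * F[:, j] @ Sigma_inv @ F[:, j]     # (1/2) f_j' Sigma^{-1} f_j
    return val
```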
Gaussian Process Latent-Variable Model (GPLVM): Our learning algorithm is also a generalization of GPLVM. If we enforce ?(j, j 0 ) = ?(j, j 0 ) in the model of bipartite graphs, then the evidence
Eq. (9) is equivalent to the form of GPLVM,
i
1 h
MN
N
log 2? +
ln |(? + ? ?1 I)| + tr (? + ? ?1 I)?1 YY> ,
L(?, ?) =
(15)
2
2
2
where Y is a fully observed M × N matrix, the mean B = 0, and there is no further regularization on Σ. GPLVM assumes that the columns of Y are conditionally independent given Σ. In this paper we
consider a situation with complex dependence of edges in network graphs.
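For reference, the GPLVM-style evidence of Eq. (15) is straightforward to evaluate; the sketch below (with assumed variable names) computes it for a fully observed Y.

    import numpy as np

    def gplvm_evidence(Sigma, Y, beta):
        # Eq. (15): 0.5 * (M*N*log(2*pi) + N*log|K| + tr(K^{-1} Y Y')),
        # with K = Sigma + I/beta and Y a fully observed M x N matrix.
        M, N = Y.shape
        K = Sigma + np.eye(M) / beta
        _, logdet = np.linalg.slogdet(K)
        quad = np.trace(np.linalg.solve(K, Y @ Y.T))
        return 0.5 * (M * N * np.log(2 * np.pi) + N * logdet + quad)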
Other Related Work: Getoor et al. [7] introduced link uncertainty in the framework of probabilistic
relational models. Latent-class relational models [17, 11, 1] have been popular, aiming to find
the block structure of links. Link prediction was cast as structured-output prediction in [15, 2].
Statistical models based on matrix factorization were studied by [8]. Our work is similar to [8] in the sense that relations are modeled by multiplications of node-wise factors. Very recently, Hoff showed in [9] that the multiplicative model generalizes the latent-class models [11, 1] and can encode the transitivity of relations.

Figure 1: The left-hand side: the subset of the UMist Faces data that contains 10 people at 10 different views; the blank blocks indicate the ten knocked-off images used as test cases. The right-hand side: the ten knocked-off images (the first row) along with predictive images; the second row shows our results, the third row the MMMF results, and the fourth row the bilinear results.
5 Numerical Experiments
We set the dimensionality of the model via validation on 10% of the training data. In cases where the additional attributes on nodes or edges are either unavailable or very weak, we compare our method with max-margin matrix factorization (MMMF) [14] using a square loss, which is similar to singular value decomposition (SVD) but can handle missing measurements.
5.1 A Demonstration on Face Reconstruction
A subset of the UMist Faces images of size 112 × 92 was selected to illustrate our algorithm; it consists of 10 people at 10 different views. We manually knocked 10 images off as test cases, as presented in Figure 1, and treated each image as a vector, which leads to a 103040 × 10 matrix with 103040 missing values, where each column corresponds to a view of faces. GP was trained by setting L1 = L2 = 4 on this matrix to learn the appearance relationships between person identity and pose. The images recovered by GP for the test cases are presented in the second row of Figure 1-right (RMSE = 0.2881). The results of MMMF are presented in the third row (RMSE = 0.4351). We also employed the bilinear models introduced by [16], which however do not handle missing data in a matrix, and put the results in the bottom row for comparison. Quantitatively and perceptually our model offers a better generalization to unseen views of known persons.
5.2 Collaborative Filtering
Collaborative filtering is a typical case of bipartite graphs, where ratings are measurements on edges of user-item pairs. We carried out a series of experiments on the whole EachMovie data, which includes 61265 users' 2811718 distinct numeric ratings on 1623 movies. We randomly selected 80% of each user's ratings for training and used the remaining 20% as test cases. The random selection was carried out 20 times independently.
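The per-user 80/20 split described above can be reproduced with a few lines of Python; the triple format is an assumption about how the ratings are stored.

    import numpy as np

    def per_user_split(ratings, train_frac=0.8, seed=0):
        # ratings: iterable of (user, item, value) triples.
        rng = np.random.default_rng(seed)
        by_user = {}
        for triple in ratings:
            by_user.setdefault(triple[0], []).append(triple)
        train, test = [], []
        for rows in by_user.values():
            rng.shuffle(rows)
            cut = int(round(train_frac * len(rows)))
            train.extend(rows[:cut])
            test.extend(rows[cut:])
        return train, test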
For comparison purposes, we also evaluated the predictive performance of four other approaches: 1) Movie Mean: the empirical mean of ratings per movie was used as the predictive value of all users' ratings on the movie; 2) User Mean: the empirical mean of ratings per user was used as the predictive value of that user's ratings on all movies; 3) Pearson Score: the Pearson correlation coefficient corresponds to a dot product between normalized rating vectors. We computed the Gram matrices of the Pearson score with mean imputation for movies and users respectively, and took principal components as their individual attributes. We tried 20 or 50 principal components as attributes in this experiment and carried out least squares regression on the observed entries. 4) MMMF: the optimal rank was decided by validation.
Table 1: Test results on the EachMovie data. The number in brackets indicates the rank we applied. The results are averaged over 20 trials, along with the standard deviation. To evaluate accuracy, we utilize root mean squared error (RMSE), mean absolute error (MAE), and normalized mean squared error (NMSE), i.e., the mean squared error divided by the variance of the observations.

    Methods        RMSE            MAE             NMSE
    Movie Mean     1.3866±0.0013   1.1026±0.0010   0.7844±0.0012
    User Mean      1.4251±0.0011   1.1405±0.0009   0.8285±0.0008
    Pearson (20)   1.3097±0.0012   1.0325±0.0013   0.6999±0.0011
    Pearson (50)   1.3034±0.0018   1.0277±0.0015   0.6931±0.0019
    MMMF (3)       1.2245±0.0503   0.9392±0.0246   0.6127±0.0516
    MMMF (15)      1.1696±0.0283   0.8918±0.0146   0.5585±0.0286
    GP (3)         1.1557±0.0010   0.8781±0.0009   0.5449±0.0011
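The three accuracy measures in Table 1 are standard; a small sketch of how they can be computed follows, with the NMSE taken as the mean squared error divided by the variance of the observations, which matches the numbers in the table.

    import numpy as np

    def rating_errors(y_true, y_pred):
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        err = y_pred - y_true
        rmse = np.sqrt(np.mean(err ** 2))
        mae = np.mean(np.abs(err))
        nmse = np.mean(err ** 2) / np.var(y_true)   # squared RMSE/std ratio
        return rmse, mae, nmse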
Table 2: Test results on the Cora data. The classification accuracy rate is averaged over 5 trials, each with 4 folds for training and one fold for test.

    Methods    DS           HA           ML           PL
    Content    53.70±0.50   67.50±1.70   68.30±1.60   56.40±0.70
    Link       48.90±1.70   65.80±1.40   60.70±1.10   58.20±0.70
    PCA (50)   61.61±1.42   69.36±1.36   70.06±0.90   60.26±1.16
    GP (50)    62.10±0.84   75.40±0.80   78.30±0.78   63.25±0.60
The results of these approaches are reported in Table 1. The per-movie average yields much better results than the per-user average, which is consistent with the findings previously reported by [5]. The improvement from using more components of the Pearson score is noticeable, but not significant. The generalization performance of our algorithm is better than that of the others. A t-test showed a significant difference (p-value 0.0387) of GP over MMMF (with 15 dimensions) in terms of RMSE. It is well worth highlighting another attractive property of our algorithm: the compact representation of factors. On the EachMovie data, there are only three factors that represent thousands of items well. We also trained MMMF with 3 factors. Although the three-factor solution GP found is also accessible to other models, MMMF failed to achieve comparable performance in this case (see the results of MMMF (3)). In each trial, the number of training samples is around 2.25 million. Our program took about 865 seconds to accomplish 500 L-BFGS updates on all 251572 parameters using an AMD Opteron 2.6 GHz processor.
5.3 Text Categorization based on Contents and Links
We used a part of the Cora corpus including 751 papers on data structures (DS), 400 papers on hardware and architecture (HA), 1617 on machine learning (ML), and 1575 on programming languages (PL). We treated the citation network as a directed graph and modeled link existence as binary labels. Our model applied the probit likelihood and learned a node-wise covariance function C, with L = 50 × 50, which composes an edge-wise covariance K by Eq. (2). We set the prior covariance C₀ using the linear kernel computed from bag-of-words content attributes. Thus the learned linear features encode both link and content information, and were then used for document classification. We compare several other methods that provide linear features for one-against-all categorization using SVM: 1) CONTENT: bag-of-words features; 2) LINK: each paper's citation list; 3) PCA: 50 components from PCA on the concatenation of the bag-of-words features and the citation list of each paper. We chose the dimensionality 50 for both GP and PCA because their performance saturated when the dimensionality exceeded 50. We report results based on 5-fold cross validation in Table 2. GP clearly outperformed the other methods in 3 out of 4 categories. The main reason, we believe, is that our approach models the in-bound and out-bound behaviors of each paper simultaneously.
6 Conclusion and Extensions
In this paper we proposed GPs for modeling data living on the links of networks. We described solutions to handle directed and undirected links, as well as links connecting heterogeneous nodes. This
work paves a way for future extensions for learning more complex relational data. For example, we
can model a network containing both directed and undirected links. Let (i, j) be directed and (i0 , j 0 )
be undirected. Based on the feature representations, Eq. (10)-right for directed links and Eq. (12) for undirected links, the covariance is K((i, j), (i′, j′)) = (1/√2)[C(i, i′)C(j, j′) + C(i, j′)C(j, i′)],
which indicates that dependence between a directed link and an undirected link is penalized compared to dependence between two undirected links. Moreover, GPs can be employed to model
multiple networks involving multiple different types of nodes. For each type, we use one node-wise
covariance. Letting covariance between two different types of nodes be zero, we obtain a huge
block-diagonal node-wise covariance matrix, where each block corresponds to one type of nodes.
This big covariance matrix will induce the edge-wise covariance for links connecting nodes of the
same or different types. In the near future it is promising to apply the model to various link prediction
or network completion problems.
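A minimal sketch of the block-diagonal construction suggested above, with hypothetical sizes and kernels for two node types; the induced edge covariance uses the symmetrized product form quoted in the previous paragraph.

    import numpy as np
    from scipy.linalg import block_diag

    C_type1 = np.eye(4)               # node-wise covariance, 4 nodes of type 1
    C_type2 = 0.5 * np.eye(3)         # node-wise covariance, 3 nodes of type 2
    C = block_diag(C_type1, C_type2)  # zero covariance across types

    def edge_cov(C, i, j, ip, jp):
        # Symmetrized product of node-wise covariances (undirected links).
        return (C[i, ip] * C[j, jp] + C[i, jp] * C[j, ip]) / np.sqrt(2)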
References
[1] E. M. Airoldi, D. M. Blei, S. E. Fienberg, and E. P. Xing, Mixed membership stochastic block
models for relational data with application to protein-protein interactions. Biometrics Society
Annual Meeting, 2006.
[2] S. Andrews and T. Jebara, Structured Network Learning. NIPS Workshop on Learning to
Compare Examples, 2006.
[3] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 2007.
[4] E. V. Bonilla, F. V. Agakov, and C. K. I. Williams. Kernel multi-task learning using taskspecific features. International Conferences on Artificial Intelligence and Statistics, 2007.
[5] J. Canny. Collaborative filtering with privacy via factor analysis. International ACM SIGIR Conference, 2002.
[6] W. Chu, V. Sindhwani, Z. Ghahramani, and S. S. Keerthi. Relational learning with Gaussian processes. Neural Information Processing Systems 19, 2007.
[7] L. Getoor, E. Segal, B. Taskar, and D. Koller. Probabilistic models of text and link structure for hypertext classification. IJCAI Workshop, 2001.
[8] P. Hoff. Multiplicative latent factor models for description and prediction of social networks. To appear in Computational and Mathematical Organization Theory, 2007.
[9] P. Hoff. Modeling homophily and stochastic equivalence in symmetric relational data. To appear in Neural Information Processing Systems 20, 2007.
[10] A. Kapoor, Y. Qi, H. Ahn, and R. W. Picard. Hyperparameter and kernel learning for graph based semi-supervised classification. Neural Information Processing Systems 18, 2006.
[11] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of
concepts with an infinite relational model. AAAI Conference on Artificial Intelligence, 2006.
[12] N. Lawrence. Gaussian process latent variable models. Journal of Machine Learning Research,
2005.
[13] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT
Press, 2006.
[14] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative
prediction. International Conference on Machine Learning, 2005.
[15] B. Taskar, M. F. Wong, P. Abbeel, and D. Koller. Link prediction in relational data. Neural Information Processing Systems 16, 2004.
[16] J. B. Tenenbaum and W. T. Freeman. Separating style and content with bilinear models. Neural
Computation, 2000.
[17] Z. Xu, V. Tresp, K. Yu, and H.-P. Kriegel. Infinite hidden relational models. International
Conference on Uncertainty in Artificial Intelligence, 2006.
[18] K. Yu, V. Tresp, and A. Schwaighofer. Learning Gaussian processes from multiple tasks.
International Conference on Machine Learning, 2005.
[19] K. Yu, W. Chu, S. Yu, V. Tresp, and Z. Xu. Stochastic relational models for discriminative link prediction. Neural Information Processing Systems 19, 2007.
[20] X. Zhu, J. Lafferty, and Z. Ghahramani. Semi-supervised learning: From Gaussian fields to Gaussian processes. Technical Report CMU-CS-03-175, Carnegie Mellon University, 2003.
| 3284 |@word trial:3 determinant:3 briefly:2 seems:1 norm:1 c0:1 tried:1 covariance:23 decomposition:1 tr:5 series:1 contains:1 score:3 document:1 interestingly:2 blank:1 recovered:1 chu:3 written:1 numerical:1 informative:1 predetermined:1 update:1 intelligence:3 selected:2 item:3 accordingly:1 earson:2 yamada:1 blei:1 node:32 contribute:1 mathematical:1 along:2 incorrect:1 consists:2 privacy:1 inter:2 behavior:3 growing:1 multi:4 freeman:1 decomposed:2 bounded:1 moreover:1 finding:2 every:1 ensured:1 prohibitively:1 ser:1 exchangeable:1 omit:1 appear:2 before:1 local:2 limit:5 aiming:1 bilinear:3 chose:1 studied:1 equivalence:1 suggests:3 factorization:4 bi:5 statistically:1 averaged:2 directed:19 decided:1 woodbury:1 practice:1 block:5 differs:1 pontil:1 empirical:2 adapting:1 word:3 induce:1 griffith:1 protein:6 suggest:1 close:2 selection:1 put:1 context:1 applying:2 wong:1 equivalent:1 missing:7 go:1 williams:2 knocked:3 independently:2 convex:6 focused:1 sigir:1 d1:3 deriving:1 handle:6 notion:1 traditionally:1 rl1:1 laplace:1 construction:1 target:1 user:10 programming:1 gps:6 element:2 asymmetric:2 agakov:1 observed:5 bottom:1 taskar:2 solved:1 capture:1 hypertext:1 thousand:2 ensures:1 pd:2 complexity:3 ui:1 trained:2 rewrite:1 tight:1 predictive:5 bipartite:13 easily:1 represented:1 america:1 various:1 distinct:1 fast:1 describe:2 artificial:3 hyper:1 outcome:1 pearson:4 kai:1 valued:1 solve:1 loglikelihood:1 rennie:1 cov:1 statistic:1 unseen:1 gp:42 jointly:2 noisy:1 itself:1 took:2 reconstruction:1 subtracting:1 interaction:4 product:3 canny:1 realization:1 kapoor:1 achieve:1 description:1 dirac:2 optimum:1 categorization:2 converges:1 object:2 tk:17 depending:1 develop:3 illustrate:1 pose:1 completion:1 measured:1 andrew:1 noticeable:1 eq:7 taskspecific:1 c:1 involves:1 indicate:1 uu:1 direction:2 attribute:15 opteron:1 stochastic:3 observational:1 fix:1 generalization:4 abbeel:1 extension:4 exploring:1 pl:2 accompanying:1 mm:1 around:1 normal:3 exp:1 lawrence:1 predict:3 purpose:1 outperformed:1 applicable:1 bag:3 label:4 individually:1 tool:1 minimization:2 cora:2 mit:1 clearly:1 gaussian:17 aim:2 rather:2 encode:2 derived:4 improvement:1 rank:3 likelihood:7 indicates:2 hk:7 contrast:1 sense:1 inference:2 dependent:1 membership:1 i0:35 cubically:1 typically:2 nn:1 hidden:1 relation:3 koller:2 overall:1 classification:6 flexible:1 special:1 fairly:1 hoff:3 marginal:2 field:1 equal:1 evgeniou:1 construct:1 having:2 saving:1 apriori:1 manually:1 placing:1 ovie:1 yu:5 future:2 others:1 report:1 quantitatively:1 randomly:1 composed:2 preserve:1 divergence:1 simultaneously:1 individual:1 replaced:1 keerthi:1 organization:1 interest:2 huge:1 highly:1 picard:1 saturated:1 bracket:1 hg:1 citizen:1 edge:32 biometrics:1 indexed:2 logarithm:1 instance:2 increased:1 modeling:8 column:2 umist:2 zn:1 cost:6 tractability:1 introducing:1 subset:2 entry:1 deviation:2 predictor:2 hzj:1 reported:3 dependency:1 accomplish:1 mmmf:10 person:2 cited:1 international:5 accessible:1 probabilistic:4 off:3 connecting:3 squared:2 central:1 nm:2 aaai:1 containing:2 style:1 segal:1 bfgs:1 sec:3 includes:2 coefficient:1 configured:1 notable:1 explicitly:1 bonilla:1 later:1 view:6 multiplicative:2 root:1 doing:1 xing:1 recover:1 rmse:6 collaborative:5 contribution:1 square:2 accuracy:2 variance:1 efficiently:1 yield:1 weak:1 bayesian:2 iid:4 basically:1 worth:1 processor:1 composes:1 explain:1 sharing:1 against:1 energy:1 proof:1 recovers:1 sampled:2 treatment:1 popular:1 knowledge:1 dimensionality:5 
supervised:2 follow:1 wei:1 improved:1 evaluated:1 generality:1 correlation:1 d:2 hand:2 web:1 nonlinear:2 somehow:1 believe:1 verify:1 normalized:4 true:1 concept:1 analytically:2 regularization:7 symmetric:6 laboratory:1 conditionally:1 transitivity:1 demonstrate:1 l1:1 fj:13 image:7 wise:18 recently:2 superior:1 common:4 empirically:1 homophily:1 million:3 cupertino:1 discussed:1 xi0:13 approximates:1 mae:2 measurement:9 significant:2 mellon:1 vec:1 language:1 dot:1 hxi:10 l3:1 similarity:1 ahn:1 recent:3 showed:2 optimizing:2 scenario:1 binary:2 meeting:1 yi:15 additional:3 employed:4 living:1 semi:2 relates:1 multiple:5 eachmovie:3 exceeds:1 technical:1 characterized:1 calculation:2 offer:3 cross:1 devised:1 serial:1 dkl:1 qi:1 prediction:12 involving:1 regression:2 heterogeneous:2 cmu:1 sometimes:1 represent:2 kernel:24 singular:1 envelope:1 undirected:20 thing:1 lafferty:1 near:1 intermediate:1 easy:1 xj:24 variate:1 independence:1 architecture:1 reduce:3 absent:1 whether:1 expression:1 casted:1 pca:4 york:1 dramatically:1 generally:1 clear:1 nonparametric:1 ten:3 tenenbaum:2 hardware:1 category:1 zj:4 estimated:1 delta:1 per:4 yy:1 discrete:2 hyperparameter:2 promise:1 carnegie:1 express:2 four:1 drawn:1 imputation:1 utilize:1 graph:30 package:1 uncertainty:2 fourth:1 extends:1 family:1 reasonable:1 ueda:1 scaling:1 comparable:1 bound:2 fold:3 quadratic:1 annual:1 strength:1 kronecker:1 aspect:1 extremely:1 relatively:1 structured:2 slightly:2 fienberg:1 ln:3 computationally:1 previously:1 turn:2 discus:2 letting:1 generalizes:2 apply:2 hierarchical:1 appropriate:2 enforce:1 rl2:1 anymore:1 alternative:1 existence:2 assumes:3 denotes:2 include:1 remaining:1 marginalized:2 concatenated:1 restrictive:1 ghahramani:2 society:1 ink:1 objective:1 quantity:2 parametric:3 dependence:11 pave:1 diagonal:1 gradient:4 link:31 separate:1 separating:1 capacity:1 concatenation:1 topic:1 amd:1 kemp:1 trivial:1 reason:1 length:1 modeled:4 index:1 relationship:2 minimizing:1 demonstration:1 equivalently:1 gk:7 trace:2 negative:3 design:2 unknown:1 observation:15 finite:3 gplvm:4 situation:2 relational:15 extended:1 hxj:2 jebara:1 rating:8 introduced:5 pair:8 ccls:1 namely:3 kl:1 connection:4 z1:2 optimized:1 baysian:1 ethods:2 learned:2 heterogenous:2 nip:1 beyond:1 suggested:1 kriegel:1 pattern:1 xm:1 hyperlink:1 program:1 built:1 oj:1 memory:2 max:1 belief:1 including:1 suitable:1 getoor:2 treated:3 predicting:1 mn:1 zhu:1 movie:7 carried:3 columbia:1 tresp:3 text:2 prior:10 review:1 l2:2 multiplication:1 loss:2 probit:2 fully:1 mixed:1 filtering:4 proven:1 srebro:1 validation:4 consistent:2 row:8 penalized:2 rasmussen:1 side:2 understand:1 face:4 absolute:1 ghz:1 regard:1 dimension:2 world:2 cumulative:1 numeric:1 gram:1 collection:3 social:2 cope:1 citation:4 compact:1 ml:2 global:1 reveals:1 receiver:1 corpus:1 assumed:1 xi:8 discriminative:1 latent:6 table:4 additionally:1 promising:2 nature:2 transfer:12 learn:5 ca:1 ean:2 unavailable:1 complex:3 main:1 linearly:1 big:2 noise:1 whole:1 x1:1 nmse:1 xu:2 attractiveness:1 cubic:2 ny:1 wish:1 explicit:1 intimate:2 third:2 learns:2 theorem:13 specific:3 list:2 svm:1 evidence:1 consist:1 essential:1 incorporating:1 workshop:2 effectively:1 airoldi:1 nec:1 perceptually:1 conditioned:2 margin:2 likely:1 sender:1 appearance:1 happening:1 conveniently:1 highlighting:1 failed:1 schwaighofer:1 partially:2 sindhwani:1 cite:1 corresponds:4 acm:1 goal:1 targeted:1 identity:2 careful:1 rbf:1 content:5 typical:1 infinite:2 acting:1 principal:2 experimental:1 
svd:1 formally:1 people:2 incorporate:2 evaluate:1 argyriou:1 phenomenon:1 |
2,519 | 3,285 | Linear Programming Analysis of Loopy Belief
Propagation for Weighted Matching
Sujay Sanghavi, Dmitry M. Malioutov and Alan S. Willsky
Laboratory for Information and Decision Systems
Massachusetts Institute of Technology
Cambridge, MA 02139
{sanghavi,dmm,willsky}@mit.edu
Abstract
Loopy belief propagation has been employed in a wide variety of applications with
great empirical success, but it comes with few theoretical guarantees. In this paper
we investigate the use of the max-product form of belief propagation for weighted
matching problems on general graphs. We show that max-product converges to the
correct answer if the linear programming (LP) relaxation of the weighted matching
problem is tight and does not converge if the LP relaxation is loose. This provides
an exact characterization of max-product performance and reveals connections to
the widely used optimization technique of LP relaxation. In addition, we demonstrate that max-product is effective in solving practical weighted matching problems in a distributed fashion by applying it to the problem of self-organization in
sensor networks.
1
Introduction
Loopy Belief Propagation (LBP) and its variants [6, 9, 13] have been shown empirically to be effective in solving many instances of hard problems in a wide range of fields. These algorithms were
originally designed for exact inference (i.e. calculation of marginals/MAP estimates) in probability
distributions whose associated graphical models are tree-structured. While some progress has been
made in understanding their convergence and accuracy on general ?loopy? graphs (see [8, 12, 13]
and their references), it still remains an active research area.
In this paper we study the application of the widely used max-product form of LBP (or simply the max-product (MP) algorithm) to the weighted matching problem. Given a graph G = (V, E) with non-negative weights w_e on its edges e ∈ E, the weighted matching problem is to find the heaviest
set of mutually disjoint edges (i.e. a set of edges such that no two edges share a node). Weighted
matching is a classic problem that has played a central role in computer science and combinatorial
optimization, with applications in resource allocation, scheduling in communications networks [10],
and machine learning [5]. It has often been perceived to be the "easiest non-trivial problem", and
one whose analysis and solution has inspired methods (or provided insights) for a variety of other
problems. Weighted matching thus naturally suggests itself as a good candidate for the study of
convergence and correctness of algorithms like max-product.
Weighted matching can be naturally formulated as an integer program. The technique of linear
programming (LP) relaxation involves replacing the integer constraints with linear inequality constraints. This relaxation is tight if the resulting linear program has an integral optimum. LP relaxation is not always tight for the weighted matching problem. The primary contribution of this paper
is an exact characterization of max-product performance for the matching problem, which also establishes a link to LP relaxation. We show that (i) if the LP relaxation is tight then max-product
converges to the correct answer, and (ii) if the LP relaxation is not tight then max-product does not
converge.
Weighted matching is a special case of the weighted b-matching problem, where there can be up to
bi edges touching node i (setting all bi = 1 reduces to simple matching). All the results of this paper
hold for the general case of b-matchings on arbitrary graphs. However, in the interests of clarity, we
provide proofs only for the conceptually easier case of simple matchings where bi = 1. The minor
modifications needed for general b-matchings will appear in a longer publication. In prior work,
Bayati et al. [2] established that max-product converges for weighted matching in bipartite graphs, and [5] extended this result to b-matching. These results are implied by our result (see footnote 1), as for bipartite
graphs, the LP relaxation is always tight.
In Section 2 we set up the weighted matching problem and its LP relaxation. We describe the max-product algorithm for weighted matching in Section 3. The main result of the paper is established in
Section 4. Finally, in Section 5 we apply b-matching to a sensor-network self-organization problem
and show that max-product provides an effective way to solve the problem in a distributed fashion.
2 Weighted Matching and its LP Relaxation
Suppose that we are given a graph G with weights w_e, and also positive integers b_i for each node i ∈ V. A b-matching is any set of edges such that the total number of edges in the set incident to any node i is at most b_i. The weighted b-matching problem is to find the b-matching of largest
weight. Weighted b-matching can be naturally formulated as the following integer program (setting
all bi = 1 gives an integer program for simple matching):
    IP:   max  Σ_{e∈E} w_e x_e
          s.t. Σ_{e∈E_i} x_e ≤ b_i for all i ∈ V,
               x_e ∈ {0, 1} for all e ∈ E
Here E_i is the set of edges incident to node i. The linear programming (LP) relaxation of the above problem is to replace the constraint x_e ∈ {0, 1} with the constraint x_e ∈ [0, 1], for each e ∈ E. We denote the corresponding linear program by LP. Throughout this paper, we will assume that LP has a unique optimum. The LP relaxation is said to be tight if the unique optimum is integral (i.e., one in which all x_e ∈ {0, 1}). Note that the LP relaxation is not tight in general. Apart from the bipartite case, the tightness of LP relaxation is a function of both the weights and the graph structure (see footnote 2).
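For concreteness, a small Python sketch of the LP relaxation follows, using scipy's generic LP solver; the function and variable names are our own, not the paper's code. Run on the unit-weight 3-cycle of footnote 2, it recovers the loose fractional optimum x_e = 1/2.

    import numpy as np
    from scipy.optimize import linprog

    def matching_lp(n_nodes, edges, weights):
        # LP relaxation of maximum-weight matching (all b_i = 1).
        m = len(edges)
        c = -np.asarray(weights, dtype=float)   # linprog minimizes
        A = np.zeros((n_nodes, m))              # one degree constraint per node
        for e, (i, j) in enumerate(edges):
            A[i, e] = 1.0
            A[j, e] = 1.0
        res = linprog(c, A_ub=A, b_ub=np.ones(n_nodes), bounds=[(0, 1)] * m)
        return res.x

    # Triangle with unit weights: the LP optimum is x_e = 1/2 on every edge.
    print(matching_lp(3, [(0, 1), (1, 2), (2, 0)], [1.0, 1.0, 1.0]))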
3 Max-Product for Weighted Matching
We now formulate weighted b-matching on G as a MAP estimation problem by constructing a
suitable probability distribution. This construction is naturally suggested by the form of the integer
program IP. Associate a binary variable x_e ∈ {0, 1} with each edge e ∈ E, and consider the following probability distribution:

    p(x) ∝ Π_{i∈V} ψ(x_{E_i}) Π_{e∈E} exp(w_e x_e),        (1)

which contains a factor ψ(x_{E_i}) for each node i ∈ V, whose value is ψ(x_{E_i}) = 1 if Σ_{e∈E_i} x_e ≤ b_i, and 0 otherwise. Note that we use i to refer both to the nodes of G and the factors of p, and e to refer both to the edges of G and the variables of p. The factor ψ(x_{E_i}) enforces the constraint that at most b_i edges incident to node i can be assigned the value "1". It is easy to see that, for any x, p(x) ∝ exp(Σ_e w_e x_e) if the set of edges {e | x_e = 1} constitutes a b-matching in G, and p(x) = 0 otherwise. Thus the max-weight b-matching of G corresponds to the MAP estimate of p.
Footnote 1: However, [2] uses a graphical model which is different from ours to represent weighted matching.
Footnote 2: A simple example: G is a cycle of length 3, with all b_i = 1. If all w_e = 1, LP relaxation is loose: setting each x_e = 1/2 is the optimum. However, if instead the weights are {1, 1, 3}, then LP relaxation is tight.
The factor-graph version of the max-product algorithm [6] passes messages between variables and
the factors that contain them (for the formulation in (1), each variable is a member of exactly two
factors). The output is an estimate x̂ of the MAP of p. We now present the max-product update equations simplified for p in (1). In the following, e and (i, j) denote the same edge. Also, for two sets A and B, the set difference is denoted A − B.
Max-Product for Weighted Matching
(INIT) Set t = 0 and initialize each message to be uniform.
(ITER) Iteratively compute new messages until convergence as follows:

    Variable to Factor:   m^{t+1}_{e→i}[x_e] = exp(w_e x_e) · m^t_{j→e}[x_e]

    Factor to Variable:   m^{t+1}_{i→e}[x_e] = max_{x_{E_i − e}} { ψ(x_{E_i}) Π_{e′∈E_i−e} m^t_{e′→i}[x_{e′}] }

Also, at each t compute beliefs n^t_e[x_e] = exp(w_e x_e) · m^t_{i→e}[x_e] · m^t_{j→e}[x_e].
(ESTIM) Upon convergence, output estimate x̂: for each edge, set x̂_e = 1 if n_e[1] > n_e[0], and x̂_e = 0 otherwise.
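A compact log-domain transcription of these updates for the b_i = 1 case is sketched below in Python. It is our illustration under stated assumptions (synchronous updates, a fixed iteration count standing in for the convergence test), not the authors' implementation; d[(v, e)] stores the message log-ratio log m_{v→e}[1] − log m_{v→e}[0].

    def other_end(edge, v):
        # The endpoint of an edge different from v.
        i, j = edge
        return j if v == i else i

    def max_product_matching(n_nodes, edges, weights, iters=200):
        # Log-domain max-product for simple matching (all b_i = 1).
        incident = [[] for _ in range(n_nodes)]
        for e, (i, j) in enumerate(edges):
            incident[i].append(e)
            incident[j].append(e)
        d = {(v, e): 0.0 for e, (i, j) in enumerate(edges) for v in (i, j)}
        for _ in range(iters):
            new = {}
            for e, (i, j) in enumerate(edges):
                for v in (i, j):
                    # log a_{e'} = max(0, w_{e'} + message from e''s other end).
                    log_a = [max(0.0, weights[ep] + d[(other_end(edges[ep], v), ep)])
                             for ep in incident[v] if ep != e]
                    # For b_i = 1: log m_{v->e}[1] - log m_{v->e}[0] = -max log a.
                    new[(v, e)] = -max(log_a) if log_a else 0.0
            d = new
        # Belief log-ratio: log n_e[1] - log n_e[0] = w_e + d_{i->e} + d_{j->e}.
        return [weights[e] + d[(i, e)] + d[(j, e)] > 0
                for e, (i, j) in enumerate(edges)]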
Remark: If the degree |E_i| of a node is large, the corresponding factor ψ(x_{E_i}) will depend on many variables. In general, for very large factors it is intractable to compute the "factor to variable" update (and even to store the factors in memory). However, for our problem the special form of ψ makes this step easy even for large degrees: for each edge e ∈ E_i compute a_e = max(1, m^t_{e→i}[1] / m^t_{e→i}[0]). Then, if all b_i = 1, we have that
if all bi = 1, we have that
Y
Y
0 ?
mt+1
mte0 ?i [0]
,
mt+1
max
a
mte0 ?i [0]
e
i?e [1] =
i?e [0] =
0
e ?Ei ?e
e0 ?Ei ?e
e0 ?Ei ?e
The simplification for general b is as follows: let Fe ? Ei ? e be the set of bi variables in Ei ? e
with the largest values of ae , and let Ge ? Ei ? e be the set of bi ? 1 variables with largest ae . Then,
Y
Y
Y
Y
ae0
mte0 ?i [0]
mt+1
ae0
mte0 ?i [0]
,
mt+1
i?e [1] =
i?e [0] =
e0 ?Ge
e0 ?Ei ?e
e0 ?Fe
e0 ?Ei ?e
These updates are linear in the degree |Ei |.
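The general-b factor-to-variable update reduces to a top-k selection over the log a values, as in the sketch below (our illustration; a sort is shown for clarity, while a linear-time selection such as np.partition recovers the stated O(|E_i|) cost).

    import numpy as np

    def factor_to_variable_general_b(log_a, b):
        # log_a: values log a_{e'} >= 0 for the edges in E_i - e.
        # Returns (log m[1], log m[0]) up to the common additive term
        # sum_{e'} log m_{e'->i}[0], via the F_e / G_e sets defined above.
        log_a = np.sort(np.asarray(log_a, float))[::-1]        # descending
        top_b = log_a[:b].sum()                                # F_e: b largest
        top_b_minus_1 = log_a[:b - 1].sum() if b > 1 else 0.0  # G_e: b-1 largest
        return top_b_minus_1, top_b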
The Computation Tree for Weighted Matching
Our proofs rely on the computation tree interpretation [12, 11] of loopy max-product beliefs, which
we now describe for the special case of simple matching (bi = 1). Recall the variables of p correspond to edges in G, and nodes in G correspond to factors. For any edge e, the computation tree
Te (1) at time 1 is just the edge e, the root of the tree. Both endpoints of the root are leaves. The tree
Te(t) at time t is generated from Te(t − 1) by adding to each leaf of Te(t − 1) a copy of each of its neighbors in G, except for the neighbor that is already present in Te(t − 1). The weights of the
edges in Te are copied from the corresponding edges in G.
Suppose M is a matching on the original graph G, and Te is a computation tree. Then, the image
of M in Te is the set of edges in Te whose corresponding copy in G is a member of M . We now
illustrate the ideas of this section with a simple example.
Example 1: Consider the figure below. G appears on the left, the numbers are the edge weights and the letters are node labels. The max-weight matching M* = {(a, b), (c, d)} is depicted in bold. In the center plot we show T_{(a,b)}(4), the computation tree at time t = 4 rooted at edge (a, b). Each node is labeled in accordance with its copy in G. The bold edges in the middle tree depict the matching which is the image of M* onto T_{(a,b)}(4). The weight of this matching is 6.6, and it is easy to see that any matching on T_{(a,b)}(4) that includes the root edge will have weight at most 6.6. On the right we depict M, the max-weight matching on the tree T_{(a,b)}(4); M has weight 7.3. In this example we see that even though (a, b) is in the unique optimal matching in G, the beliefs at the root are such that n^4_{(a,b)}[0] > n^4_{(a,b)}[1]. Note also that the dotted edges are not an image of any matching in the original graph G. This example thus illustrates how "spurious" matchings in the computation tree can lead to incorrect beliefs and estimates.
[Figure: the example graph G with edge weights (left); the computation tree T_{(a,b)}(4) with the image of M* in bold (center); the max-weight matching M on T_{(a,b)}(4) (right).]
4 Main Result: Equivalence of LP Relaxation and Loopy Max-product
In this section we formally state the main result of this paper, and give an outline of the proofs.
Theorem 1 Let G = (V, E) be a graph with nonnegative real weights w_e on the edges e ∈ E.
Assume the linear programming relaxation LP has a unique optimal solution. Then, the following
holds:
1. If the LP relaxation is tight, i.e. if the unique solution is integral, then the max-product
converges and the resulting estimate is the optimal matching.
2. If the LP relaxation is not tight, i.e. if the unique solution contains fractional values, then
the max-product does not converge.
The above theorem implies that LP relaxation and Max-product will both succeed, or both fail, on
the same problem instances, and thus are equally powerful for the weighted matching problem. We
now prove the two parts of the theorem. In the interest of brevity and clarity, the theorem and the
proofs are presented for the conceptually easier case of simple matchings, in which all bi = 1. Also,
for the purposes of the proofs we will assume that ?convergence? means that there exists a ? < ?
such that the maximizing assignment arg maxxe nte (xe ) remains constant for all t > ? .
Proof of Part 1: Max-Product is as Powerful as LP Relaxation
Suppose LP has an integral optimum. Consider now the linear-programming dual of LP, denoted
below as DUAL.
X
DUAL :
min
zi
i?V
s.t.
wij ? zi + zj for all (i, j) ? E,
zi ? 0 for all i ? V
The following lemma states that the standard linear programming properties of complimentary
slackness hold in the strict sense for the weighted matching problem (this is a special case of [3,
ex. 4.20]).
Lemma 1 (strict complementary slackness) If the solution to LP is unique and integral, and M* is the optimal matching, then there exists an optimal dual solution z to DUAL such that
1. For all (i, j) ∈ M*, we have w_ij = z_i + z_j
2. There exists ε > 0 such that for all (i, j) ∉ M*, we have w_ij ≤ z_i + z_j − ε
3. If no edge in M* is incident on node i, then z_i = 0
4. z_i ≤ max_e w_e for all i
Let t ≥ 2w_max/ε, where w_max = max_e w_e is the weight of the heaviest edge and ε is as in part 2 of Lemma 1 above. Suppose now that there exists an edge e ∉ M* for which the belief at time t is incorrect, i.e., n^t_e[1] > n^t_e[0]. We now show that this leads to a contradiction.
incorrect, i.e ne [1] > ne [0]. We now show that this leads to a contradiction.
Recall that n^t_e[1] > n^t_e[0] means that there is a matching M in Te(t) such that (a) the root e ∈ M, and (b) M is a max-weight matching on Te(t). Let M*_T be the image of M* onto Te(t). By definition, e ∉ M*_T. From e, build an alternating path P by successively adding edges as follows: first add e, then add all edges adjacent to e that are in M*_T, then all their adjacent edges that are in M, and so forth until no more edges can be added; this will occur either because no edges are available that maintain the alternating structure, or because a leaf of Te(t) has been reached. Note that this will be a path, because M and M*_T are matchings, and so any node in Te(t) can have at most one edge adjacent to it in each of the two matchings.
For illustration, consider Example 1 of Section 3. M*_T is in the center plot and M is on the right. The above procedure for building P would yield the path adcabcda that goes from the left-most leaf to the right-most leaf. It is easy to see that this path alternates between edges in M and M*_T.
We now show that w(P ∩ M*_T) > w(P ∩ M). Let z be the dual optimum corresponding to the ε above. Suppose first that neither endpoint of P is a leaf of Te(t). Then, from parts 1 and 3 of Lemma 1 it follows that

    w(P ∩ M*_T) = Σ_{(i,j)∈P∩M*_T} w_ij = Σ_{(i,j)∈P∩M*_T} (z_i + z_j) = Σ_{i∈P} z_i.
On the other hand, from part 2 of Lemma 1 it follows that

    w(P ∩ M) = Σ_{(i,j)∈P∩M} w_ij ≤ Σ_{(i,j)∈P∩M} (z_i + z_j − ε) = (Σ_{i∈P} z_i) − ε|P ∩ M|.
Now by construction the root e ∈ P ∩ M, and hence w(P ∩ M*_T) > w(P ∩ M). A similar argument, with minor modifications, holds for the case when one or both endpoints of P are leaves of Te. For these cases we would need to make explicit use of the fact that t ≥ 2w_max/ε.
We now show that M cannot be an optimal matching in Te(t). We do so by "flipping" the edges in P to obtain a matching with higher weight. Specifically, let M′ = M − (P ∩ M) + (P ∩ M*_T) be the matching containing all edges in M except the ones in P, which are replaced by the edges in P ∩ M*_T. It is easy to see that M′ is a matching in Te(t), and that w(M′) > w(M). This contradicts the choice of M, and shows that for e ∉ M* the beliefs satisfy n^t_e[1] ≤ n^t_e[0] for all t large enough. This means that the estimate has converged and is correct for e. A similar argument can be used to show that the max-product estimate converges to the correct answer for e ∈ M* as well. Hence max-product converges globally to the correct M*.
Proof of Part 2: LP Relaxation is as Powerful as Max-Product
Suppose the optimum solution of LP contains fractional values. We now show that in this case
max-product does not converge. As a first step we have the following lemma.
Lemma 2 If max-product converges, the resulting estimate is M*.
The proof of this lemma uses the result in [12], which states that if max-product converges then the resulting estimates are "locally optimal": the posterior probability of the max-product assignment cannot be increased by changing values in any induced subgraph in which each connected component contains at most one loop. For the weighted matching problem this local optimality implies global optimality, because the symmetric difference of any two matchings is a union of disjoint paths and cycles. The above lemma implies that, for the proof of part 2 of the theorem, it is sufficient to show that max-product does not converge to the correct answer M*. We do this by showing that for any given τ, there exists a t ≥ τ such that the max-product estimate at time t will not be M*.
We first provide a combinatorial characterization of when the LP relaxation is loose. Let M* be the max-weight matching on G. An alternating path in G is a path in which every alternate edge is in M*, and each node appears at most once. A blossom is an alternating path that wraps onto itself, such that the result is a single odd cycle C and a path R leading out of that cycle (see footnote 3). The importance of blossoms for matching problems is well-known [4]. A bad blossom is a blossom in which the edge weights satisfy

    w(C ∩ M*) + 2w(R ∩ M*) < w(C − M*) + 2w(R − M*).

Footnote 3: The path may be of zero length, in which case the blossom is just the odd cycle.
Example: On the right is a bad blossom: bold edges are in M*, numbers are edge weights, and letters are node labels. Cycle C in this case is abcdu, and path R is cfghi. [Figure: bad blossom with cycle abcdu and stem cfghi; M* edges in bold.]
A dumbbell is an alternating path that wraps onto itself twice, such that the result is two disjoint odd cycles C1 and C2 and an alternating path R connecting the two cycles. In a bad dumbbell the edge weights satisfy

    w(C1 ∩ M*) + w(C2 ∩ M*) + 2w(R ∩ M*) < w(C1 − M*) + w(C2 − M*) + 2w(R − M*).
Example: On the right is a bad dumbbell. Cycles C1 and C2 are abcdu and fghij, and (c, f) is the path R. [Figure: bad dumbbell with cycles abcdu and fghij joined by the edge (c, f).]
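Checking whether a given blossom is bad is a direct weight comparison; the following sketch assumes the blossom is supplied as edge lists, with a membership predicate in_mstar and a weight function w (both hypothetical), and its inequality orientation follows the reconstruction above.

    def is_bad_blossom(cycle_edges, stem_edges, in_mstar, w):
        # Compare the M* side of the cycle/stem against the complementary side.
        wc_in = sum(w(e) for e in cycle_edges if in_mstar(e))
        wc_out = sum(w(e) for e in cycle_edges if not in_mstar(e))
        wr_in = sum(w(e) for e in stem_edges if in_mstar(e))
        wr_out = sum(w(e) for e in stem_edges if not in_mstar(e))
        return wc_in + 2 * wr_in < wc_out + 2 * wr_out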
Proposition 1 If LP relaxation is loose, then in G there exists either a bad blossom or a bad dumbbell.
Proof. The proof of this proposition will appear in a longer version of this paper. (It is also in the
appendix submitted along with the paper).
Suppose now that max-product converges to M* by iteration τ, and suppose also that there exists a bad blossom B1 in G. For an edge e ∈ B1 ∩ M*, consider the computation tree Te(τ + |V|) for e at time τ + |V|. Let M be the optimal matching on the tree. From the definition of convergence, it follows that near the root e, M will be the image of M* onto Te: for any edge e′ in the tree at distance less than |V| from the root, e′ ∈ M if and only if its copy in G is in M*.
This means that the copies in Te of the edges in B1 will contain an alternating path P in Te: every alternate edge of P will be in M. For the bad blossom example above, the alternating path is ihgfcbaudcfghi (it goes once around the cycle and twice along the path of the blossom). Make a new matching M′ on Te(τ + |V|) by "switching" the edges in this path: M′ = M − (M ∩ P) + (P − M). Then, it is easy to see that

    w(M) − w(M′) = w(C ∩ M*) + 2w(R ∩ M*) − w(C − M*) − 2w(R − M*).

By assumption B1 is a bad blossom, and hence we have that w(M) < w(M′), which violates the optimality of M. Thus, max-product does not converge to M* if there exists a bad blossom. A similar proof precludes convergence to M* for the case when there is a bad dumbbell. It follows from Proposition 1 that if LP relaxation is loose, then max-product cannot converge to M*.
5 Sensor network self-organization via b-matching
We now consider the problem of sensor network self-organization. Suppose a large number of low-cost sensors are deployed randomly over an area, and as a first step of any communication or remote
sensing application the sensors have to organize themselves into a network [1]. The network should
be connected, and robust to occasional failing links, but at the same time it should be sparse (i.e.
have nodes with small degrees) due to severe limitations on power available for communication.
Simply connecting every pair of sensors that lie within some distance d of each other (close enough
to communicate reliably) may lead to large clusters of very densely connected components, and
nodes with high degrees. Hence, sparser networks with fewer edges are needed [7]. The throughput
of a link drops fast with distance, so the sparse network should mostly contain short edges. The
sparsest connected network is achieved by a spanning tree solution. However, a spanning tree may
have nodes with large degrees, and a single failed link disconnects it. An interesting set of sparse
subgraph constructions with various tradeoffs addressing power efficiency in wireless networks is
proposed in [7].
Figure 1: Network with N = 100 nodes. (a) Nodes within d = 0.5 are connected by an edge. (b)
Sparse network obtained by b-matching with b = 5.
Figure 2: (a) Histogram of node degrees versus node density. (b) Average fraction of the LP upper
bound on optimal cost obtained using LP relaxation and max-product.
We consider using b-matching to find a sparse power-efficient subgraph. We assign edge weights to
be proportional to the throughput of the link. For typical sensor network applications the received
power (which can be used as a measure of throughput) decays as d^{−p} with distance, where p ∈ [2, 4]. We set p = 3 for concreteness, and let the edge weights be w_e = d_e^{−p}. The b-matching objective
is now to maximize the total throughput (received power) among sparse subgraphs with degree at
most b. We use the max-product algorithm to solve weighted b-matching in a distributed fashion.
For our experiments we randomly disperse N nodes in the square region [−1, 1] × [−1, 1]. First we
create the adjacency graph for nodes that are close enough to communicate, we set the threshold to
be d = 0.5. In Figure 2(a) we plot the histogram of the resulting node degrees over 100 trials. Clearly,
as N increases, nodes have increasingly higher degrees.
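The experimental setup above is easy to reproduce; a sketch (with assumed function and variable names) that builds the random geometric graph with received-power edge weights:

    import numpy as np

    def sensor_graph(n, d=0.5, p=3, seed=0):
        # Random deployment in [-1, 1]^2; connect nodes within distance d
        # and weight each edge by the received power d_e^{-p}.
        rng = np.random.default_rng(seed)
        pts = rng.uniform(-1.0, 1.0, size=(n, 2))
        edges, weights = [], []
        for i in range(n):
            for j in range(i + 1, n):
                dist = float(np.linalg.norm(pts[i] - pts[j]))
                if dist <= d:
                    edges.append((i, j))
                    weights.append(dist ** -p)
        return pts, edges, weights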
Next we apply max-product (MP) and LP relaxation (see footnote 4) to solve the b-matching objective. As we have
established earlier, the performance of LP relaxation, and hence, of MP for b-matching depends on
the existence of ?bad blossoms?, i.e. odd-cycles where the weights on the edges are quite similar.
We show in simulations that bad blossoms appear rarely for the random graphs and weights in our
construction, and LP-relaxation and MP produce nearly optimal b-matchings. For the cases where
LP relaxation has fractional edges, and MP has oscillating (or non-converged) edges, we erase them
from the final matching and ensure that LP and MP solutions are valid matchings. Also, instead of
comparing LP and MP costs to the optimal b-matching cost, we compare them to the LP upper bound
on the cost (the cost of the fractional LP solution). This avoids the need to find optimal b-matchings.
In Figure 1 we plot the dense adjacency graph for N = 100 nodes, and the much sparser b-matching subgraph with b = 5 obtained by MP. Now, consider Figure 2(b), where we plot the percentage of the LP upper bound obtained by MP and by rounded LP relaxation.
Footnote 4: LP is not practical for sensor networks, as it is not easily distributed.
Figure 3: (a) Average fraction of the LP upper bound on optimal cost obtained using T iterations of max-product. (b) A table showing the probability of disconnect and the power stretch factor for N = 100, averaged over 100 trials:

                            b = 5    b = 7    b = 10
    Fraction disconnected   5/100    0/100    0/100
    Mean power stretch      3.64     1.45     1.06

It can be seen that both LP and MP
produce nearly optimal b-matchings, with more than 98 percent of the optimal cost. The percentage
decreases slowly with sensor density (with higher N), but improves for larger b. An important performance metric for sensor network self-organization is the power-stretch factor (see footnote 5), which compares
the weights of shortest paths in G to weights of shortest paths in the sparse subgraph. In figure 3(b)
we display the maximum power stretch factor over all pairs of nodes, averaged over 100 trials. For
b = 10 there is almost no loss in power by using the sparse subgraph. A limitation of the b-matching
solution is that connectedness of the subgraph is not guaranteed. In fact, for b = 1 it is always
disconnected. However, as b increases, the graph gets rarely disconnected. In figure 3(b) we display
probability of disconnect over 100 trials. For b = 10 and N = 100 in a longer simulation, the sparse
subgraph got disconnected twice over 500 trials.
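The power-stretch computation reduces to all-pairs shortest paths under the power cost of each edge; a sketch using networkx follows, assuming p = 3 so that the power cost of an edge is 1/w_e = d_e^3, and taking the maximum over node pairs connected in both graphs. The function and argument names are ours.

    import networkx as nx

    def max_power_stretch(n, edges, weights, sub_edges, sub_weights):
        # Ratio of shortest-path power costs in the sparse subgraph vs. G.
        def build(es, ws):
            g = nx.Graph()
            g.add_nodes_from(range(n))
            for (i, j), w in zip(es, ws):
                g.add_edge(i, j, cost=1.0 / w)   # 1/w_e = d_e^3 for p = 3
            return g
        dG = dict(nx.all_pairs_dijkstra_path_length(build(edges, weights), weight="cost"))
        dH = dict(nx.all_pairs_dijkstra_path_length(build(sub_edges, sub_weights), weight="cost"))
        return max(dH[u][v] / dG[u][v] for u in dG for v in dG[u]
                   if u != v and v in dH.get(u, {}))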
In Figure 3(a) we study the performance of MP versus the number of iterations. We run MP for a fixed
number of iterations, remove oscillating edges to get a valid matching, and plot the average fraction
of the LP upper bound that the solution gets. We set b = 5, and N = 100. Quite surprisingly, MP
achieves a large percentage of the optimal cost even with as few as 3 iterations. After 20 this figure
exceeds 99 percent.
References
[1] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "A survey on sensor networks," IEEE Communications Magazine, vol. 40, no. 8, pp. 102-114, Aug. 2002.
[2] M. Bayati, D. Shah, and M. Sharma, "Maximum weight matching via max-product belief propagation," in ISIT, Sept. 2005, pp. 1763-1767.
[3] D. Bertsimas and J. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, 1997.
[4] J. Edmonds, "Paths, trees and flowers," Canadian Journal of Mathematics, vol. 17, pp. 449-467, 1965.
[5] B. Huang and T. Jebara, "Loopy belief propagation for bipartite maximum weight b-matching," in Artificial Intelligence and Statistics (AISTATS), March 2007.
[6] F. Kschischang, B. Frey, and H. Loeliger, "Factor graphs and the sum-product algorithm," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 498-519, Feb. 2001.
[7] X. Y. Li, P. J. Wan, Y. Wang, and O. Frieder, "Sparse power efficient topology for wireless networks," in Proc. IEEE Hawaii Int. Conf. on System Sciences, Jan. 2002.
[8] D. Malioutov, J. Johnson, and A. Willsky, "Walk-sums and belief propagation in Gaussian graphical models," Journal of Machine Learning Research, vol. 7, pp. 2031-2064, Oct. 2006.
[9] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[10] L. Tassiulas and A. Ephremides. Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks. IEEE Trans. on Automatic Control, vol. 37, no. 12, Dec. 1992.
[11] S. Tatikonda and M. Jordan, "Loopy belief propagation and Gibbs measures," in Uncertainty in Artificial Intelligence, vol. 18, 2002, pp. 493-500.
[12] Y. Weiss and W. Freeman, "On the optimality of solutions of the max-product belief-propagation algorithm in arbitrary graphs," IEEE Trans. on Information Theory, vol. 47, no. 2, pp. 736-744, Feb. 2001.
[13] J. Yedidia, W. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. Exploring AI in the New Millennium, pp. 239-269, 2003.
Footnote 5: To compute the power-stretch, the edges are weighted by d^3, i.e., the power needed to get a fixed throughput.
| 3285 |@word trial:5 middle:1 version:2 simulation:2 contains:4 loeliger:1 ours:1 comparing:1 dumbbell:5 remove:1 designed:1 plot:6 update:3 depict:2 drop:1 intelligence:2 leaf:7 fewer:1 short:1 characterization:3 provides:2 node:29 multihop:1 mtj:1 along:1 c2:4 incorrect:2 prove:1 ghi:2 themselves:1 inspired:1 globally:1 freeman:2 erase:1 provided:1 notation:1 easiest:1 complimentary:2 guarantee:1 every:3 exactly:1 me0:1 control:1 appear:3 organize:1 positive:1 accordance:1 local:1 frey:1 switching:1 path:21 connectedness:1 twice:3 equivalence:1 suggests:1 range:1 bi:14 averaged:2 practical:2 unique:7 enforces:1 union:1 procedure:1 jan:1 area:2 empirical:1 got:1 matching:70 get:4 onto:5 cannot:2 close:2 scheduling:2 applying:1 map:4 center:2 maximizing:1 go:2 survey:1 formulate:1 contradiction:1 insight:1 subgraphs:1 classic:1 stability:1 construction:4 suppose:9 magazine:1 exact:3 programming:7 us:2 associate:1 labeled:1 role:1 wang:1 region:1 cycle:10 connected:5 remote:1 decrease:1 depend:1 tight:11 solving:2 upon:1 bipartite:4 efficiency:1 matchings:12 easily:1 various:1 alphabet:1 fast:1 effective:3 describe:2 artificial:2 whose:3 quite:2 widely:2 solve:3 larger:1 tightness:1 otherwise:3 precludes:1 statistic:1 itself:3 ip:2 final:1 product:41 loop:1 subgraph:8 forth:1 convergence:7 cluster:1 optimum:7 produce:2 oscillating:2 converges:9 illustrate:1 odd:4 minor:2 received:2 progress:1 aug:1 involves:1 come:1 implies:3 correct:6 violates:1 adjacency:2 assign:1 generalization:1 proposition:3 isit:1 exploring:1 stretch:5 hold:4 around:2 exp:4 great:1 achieves:1 purpose:1 perceived:1 estimation:1 failing:1 proc:1 combinatorial:2 label:2 radio:1 tatikonda:1 largest:3 correctness:1 create:1 establishes:1 weighted:28 mit:1 clearly:1 sensor:12 always:3 gaussian:1 publication:1 sense:1 inference:2 spurious:1 wij:5 arg:1 dual:6 among:1 denoted:2 constrained:1 special:4 initialize:1 field:1 once:2 throughput:6 nearly:2 sanghavi:2 intelligent:1 few:2 randomly:2 densely:1 replaced:1 maintain:1 organization:5 interest:2 message:3 investigate:1 disperse:1 severe:1 edge:60 integral:5 tree:17 walk:1 e0:9 theoretical:1 instance:2 increased:1 earlier:1 assignment:2 loopy:8 cost:8 addressing:1 uniform:1 johnson:1 frieder:1 answer:4 density:2 probabilistic:1 rounded:1 connecting:2 heaviest:2 central:1 successively:1 containing:1 huang:1 slowly:1 wan:1 hawaii:1 conf:1 leading:1 li:1 bold:3 includes:1 disconnect:3 int:1 satisfy:3 mp:13 depends:1 root:8 reached:1 contribution:1 square:1 accuracy:1 kaufmann:1 correspond:2 yield:1 conceptually:2 malioutov:2 converged:2 submitted:1 definition:2 pp:7 naturally:4 associated:1 proof:12 mi:2 massachusetts:1 recall:2 fractional:4 improves:1 appears:2 originally:1 higher:3 wei:2 formulation:1 though:1 just:2 dmm:1 until:2 hand:1 replacing:1 ei:15 wmax:3 su:1 propagation:10 slackness:2 scientific:1 building:1 contain:3 hence:5 assigned:1 alternating:8 symmetric:1 laboratory:1 iteratively:1 adjacent:3 self:5 rooted:1 outline:1 demonstrate:1 percent:2 image:5 mt:19 empirically:1 endpoint:3 interpretation:1 marginals:1 refer:2 cambridge:1 gibbs:1 ai:1 automatic:1 sujay:1 mathematics:1 longer:3 add:2 feb:2 posterior:1 touching:1 apart:1 store:1 inequality:1 binary:1 success:1 xe:21 seen:1 morgan:1 employed:1 converge:7 maximize:1 shortest:2 sharma:1 xe0:1 ii:1 reduces:1 alan:1 exceeds:1 calculation:1 equally:1 variant:1 ae:3 metric:1 iteration:5 represent:1 histogram:2 achieved:1 dec:1 c1:4 addition:1 lbp:2 pass:1 strict:2 induced:1 member:2 jordan:1 integer:6 near:1 
canadian:1 easy:6 enough:3 variety:2 zi:11 topology:1 idea:1 tradeoff:1 constitute:1 remark:1 result1:1 lowcost:1 locally:1 percentage:3 zj:5 dotted:1 disjoint:3 edmonds:1 vol:7 iter:1 threshold:1 queueing:1 clarity:2 changing:1 neither:1 d3:1 graph:18 relaxation:33 concreteness:1 fraction:4 bertsimas:1 sum:2 run:1 letter:1 powerful:3 communicate:2 uncertainty:1 xei:7 throughout:1 almost:1 decision:1 appendix:1 bound:5 guaranteed:1 played:1 simplification:1 copied:1 display:2 nonnegative:1 occur:1 constraint:5 bp:3 argument:2 min:1 optimality:4 structured:1 alternate:3 march:1 disconnected:4 increasingly:1 contradicts:1 lp:51 n4:2 modification:2 resource:1 mutually:1 remains:2 equation:1 loose:5 fail:1 needed:3 ge:2 available:2 yedidia:1 apply:2 occasional:1 structure2:1 shah:1 existence:1 original:2 cf:1 ensure:1 graphical:3 build:1 implied:1 objective:2 already:1 added:1 flipping:1 primary:1 said:1 wrap:2 distance:4 link:5 athena:1 me:1 trivial:1 spanning:2 willsky:3 length:2 illustration:1 mostly:1 fe:2 negative:1 reliably:1 policy:1 upper:5 extended:1 communication:4 arbitrary:2 jebara:1 pair:2 connection:1 established:3 pearl:1 trans:2 suggested:1 below:1 flower:1 program:6 max:49 memory:1 belief:16 power:13 suitable:1 rely:1 millennium:1 technology:1 ne:4 sept:1 prior:1 understanding:2 loss:1 interesting:1 limitation:2 allocation:1 proportional:1 versus:2 bayati:2 incident:4 degree:10 sufficient:1 nte:6 share:1 surprisingly:1 wireless:2 copy:5 tsitsiklis:1 blossom:14 institute:1 wide:2 neighbor:2 sparse:10 distributed:4 valid:2 avoids:1 made:1 simplified:1 transaction:1 dmitry:1 estim:1 global:1 active:1 reveals:1 b1:4 table:1 mj:1 robust:1 kschischang:1 init:1 constructing:1 aistats:1 main:3 dense:1 fashion:3 deployed:1 explicit:1 sparsest:1 candidate:1 lie:1 theorem:5 bad:13 showing:2 sensing:1 decay:1 intractable:1 exists:8 adding:2 importance:1 te:23 illustrates:1 sparser:2 easier:2 depicted:1 simply:2 failed:1 corresponds:1 ma:1 succeed:1 oct:1 formulated:2 replace:1 hard:1 specifically:1 except:2 typical:1 lemma:9 total:2 maxproduct:1 maxe:2 formally:1 rarely:2 brevity:1 ex:1 |
2,520 | 3,286 | Efficient multiple hyperparameter
learning for log-linear models
Chuong B. Do
Chuan-Sheng Foo
Andrew Y. Ng
Computer Science Department
Stanford University
Stanford, CA 94305
{chuongdo,csfoo,ang}@cs.stanford.edu
Abstract
In problems where input features have varying amounts of noise, using distinct
regularization hyperparameters for different features provides an effective means
of managing model complexity. While regularizers for neural networks and support vector machines often rely on multiple hyperparameters, regularizers for
structured prediction models (used in tasks such as sequence labeling or parsing) typically rely only on a single shared hyperparameter for all features. In this
paper, we consider the problem of choosing regularization hyperparameters for
log-linear models, a class of structured prediction probabilistic models which includes conditional random fields (CRFs). Using an implicit differentiation trick,
we derive an efficient gradient-based method for learning Gaussian regularization
priors with multiple hyperparameters. In both simulations and the real-world task
of computational RNA secondary structure prediction, we find that multiple hyperparameter learning can provide a significant boost in accuracy compared to
using only a single regularization hyperparameter.
1 Introduction
In many supervised learning methods, overfitting is controlled through the use of regularization
penalties for limiting model complexity. The effectiveness of penalty-based regularization for a
given learning task depends not only on the type of regularization penalty used (e.g., L1 vs L2 ) [29]
but also (and perhaps even more importantly) on the choice of hyperparameters governing the regularization penalty (e.g., the hyperparameter C in an isotropic Gaussian parameter prior, C||w||^2).
When only a single hyperparameter must be tuned, cross-validation provides a simple yet reliable
procedure for hyperparameter selection. For example, the regularization hyperparameter C in a
support vector machine (SVM) is usually tuned by training the SVM with several different values
of C, and selecting the one that achieves the best performance on a holdout set. In many situations,
using multiple hyperparameters gives the distinct advantage of allowing models with features of
varying strength; for instance, in a natural language processing (NLP) task, features based on word
bigrams are typically noisier than those based on individual word occurrences, and hence should
be ?more regularized? to prevent overfitting. Unfortunately, for sophisticated models with multiple
hyperparameters [23], the na??ve grid search strategy of directly trying out possible combinations of
hyperparameter settings quickly grows infeasible as the number of hyperparameters becomes large.
Scalable strategies for cross-validation-based hyperparameter learning that rely on computing
the gradient of cross-validation loss with respect to the desired hyperparameters arose first in the
neural network modeling community [20, 21, 1, 12]. More recently, similar cross-validation optimization techniques have been proposed for other supervised learning models [3], including support vector machines [4, 10, 16], Gaussian processes [35, 33], and related kernel learning methods [18, 17, 39]. Here, we consider the problem of hyperparameter learning for a specialized class
of structured classification models known as conditional log-linear models (CLLMs), a generalization of conditional random fields (CRFs) [19].
Whereas standard binary classification involves mapping an object x ∈ X to some binary output y ∈ Y (where Y = {±1}), the input space X and output space Y in a structured classification task
generally contain complex combinatorial objects (such as sequences, trees, or matchings). Designing hyperparameter learning algorithms for structured classification models thus yields a number of
unique computational challenges not normally encountered in the flat classification setting. In this
paper, we derive a gradient-based approach for optimizing the hyperparameters of a CLLM using the
loss incurred on a holdout set. We describe the required algorithms specific to CLLMs which make
the needed computations tractable. Finally, we demonstrate on both simulations and a real-world
computational biology task that our hyperparameter learning method can give gains over learning
flat unstructured regularization priors.
2
Preliminaries
Conditional log-linear models (CLLMs) are a probabilistic framework for sequence labeling or parsing problems, where X is an exponentially large space of possible input sequences and Y is an
exponentially large space of candidate label sequences or parse trees. Let F : X × Y → R^n be
a fixed vector-valued mapping from input-output pairs to an n-dimensional feature space. CLLMs
model the conditional probability of y given x as P(y | x; w) = exp(wᵀF(x, y))/Z(x), where
Z(x) = Σ_{y′∈Y} exp(wᵀF(x, y′)). Given a training set T = {(x^(i), y^(i))}_{i=1}^m of i.i.d. labeled input-output pairs drawn from some unknown fixed distribution D over X × Y, the parameter learning
problem is typically posed as maximum a posteriori (MAP) estimation (or equivalently, regularized
logloss minimization):
w* = arg min_{w∈R^n} ( ½ wᵀCw − Σ_{i=1}^m log P(y^(i) | x^(i); w) ),   (OPT1)

where ½ wᵀCw (for some positive definite matrix C) is a regularization penalty used to prevent
overfitting. Here, C is the inverse covariance matrix of a Gaussian prior on the parameters w.
While a number of efficient procedures exist for solving the optimization problem OPT1 [34, 11],
little attention is usually given to choosing an appropriate regularization matrix C. Generally, C is
parameterized using a small number of free variables, d ∈ R^k, known as the hyperparameters of the
model. Given a holdout set H̃ = {(x̃^(i), ỹ^(i))}_{i=1}^m̃ of i.i.d. examples drawn from D, hyperparameter
learning itself can be cast as an optimization problem:
minimize_{d∈R^k}  − Σ_{i=1}^m̃ log P(ỹ^(i) | x̃^(i); w*(C)).   (OPT2)
In words, OPT2 finds the hyperparameters d whose regularization matrix C leads the parameter
vector w*(C) learned from the training set to obtain small logloss on holdout data. For many real-world applications, C is assumed to take a simple form, such as a scaled identity matrix, CI. While
this parameterization may be partially motivated by concerns of hyperparameter overfitting [28],
such a choice usually stems from the difficulty of hyperparameter inference.
In practice, grid-search procedures provide a reliable method for determining hyperparameters to low precision: one trains the model using several candidate values of C (e.g., C ∈ {. . . , 2^{−2}, 2^{−1}, 2^0, 2^1, 2^2, . . .}), and chooses the C that minimizes holdout logloss. While this strategy is suitable for tuning a single model hyperparameter, more sophisticated strategies are necessary
when optimizing multiple hyperparameters.
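To fix ideas, the grid-search baseline can be sketched in a few lines. This is a hedged illustration only: `train_crf(C)` (a solver for OPT1 with regularizer C·I) and `holdout_logloss(w)` are placeholder names we assume the surrounding system provides; they are not part of the paper's algorithm.

```python
import numpy as np

def grid_search_C(train_crf, holdout_logloss, powers=range(-5, 6)):
    """Single-hyperparameter grid search over C in {2^-5, ..., 2^5}:
    train with each candidate C and keep the one with the lowest
    holdout logloss."""
    best_C, best_loss = None, np.inf
    for p in powers:
        C = 2.0 ** p
        w = train_crf(C)              # solve OPT1 with C * I as the regularizer
        loss = holdout_logloss(w)     # evaluate on the holdout set
        if loss < best_loss:
            best_C, best_loss = C, loss
    return best_C, best_loss
```

Each candidate requires a full training run, which is why this strategy stops being practical once several hyperparameters must be tuned jointly.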
3
Learning multiple hyperparameters
In this section, we lay the framework for multiple hyperparameter learning by describing a simple
yet flexible parameterization of C that arises quite naturally in many practical problems. We then
describe a generic strategy for hyperparameter adaptation via gradient-based optimization.
Consider a setting in which predefined subsets of parameter components (which we call regularization groups) are constrained to use the same hyperparameters [6]. For instance, in an
NLP task, individual word occurrence features may be placed in a separate regularization group
from word bigram features. Formally, let k be a fixed number of regularization groups, and let
φ : {1, . . . , n} → {1, . . . , k} be a prespecified mapping from parameters to regularization groups.
Furthermore, for a vector x ∈ R^k, define its expansion x̄ ∈ R^n as x̄ = (x_{φ(1)}, x_{φ(2)}, . . . , x_{φ(n)}).
In the sequel, we parameterize C ∈ R^{n×n} in terms of some hyperparameter vector d ∈ R^k
as the diagonal matrix C(d) = diag(exp(d̄)). Under this representation, C(d) is necessarily positive definite, so OPT2 can be written as an unconstrained minimization over the variables
d ∈ R^k. Specifically, let ℓ_T(w) = −Σ_{i=1}^m log P(y^(i) | x^(i); w) denote the training logloss and
ℓ_H(w) = −Σ_{i=1}^m̃ log P(ỹ^(i) | x̃^(i); w) the holdout logloss for a parameter vector w. Omitting the
dependence of C on d for notational convenience, we have the optimization problem

minimize_{d∈R^k} ℓ_H(w*)   subject to   w* = arg min_{w∈R^n} ( ½ wᵀCw + ℓ_T(w) ).   (OPT2′)
For any fixed setting of these hyperparameters, the objective function of OPT2′ can be evaluated by
(1) using the hyperparameters d to determine the regularization matrix C, (2) solving OPT1 using
C to determine w*, and (3) computing the holdout logloss using the parameters w*. In the next
section, we derive a method for computing the gradient of the objective function of OPT2′ with
respect to the hyperparameters. Given both procedures for function and gradient evaluation, we may
apply standard gradient-based optimization (e.g., conjugate gradient or L-BFGS [30]) in order to
find a local optimum of the objective. In general, we observe that only a few iterations (≈ 5) are
usually sufficient to determine reasonable hyperparameters to low accuracy.
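As a concrete illustration of this outer loop, here is a minimal sketch in Python/scipy. It treats the inner OPT1 solver and the hyperparameter-gradient routine (derived in the next section) as black boxes; all callable names are our own placeholders rather than part of any published implementation.

```python
import numpy as np
from scipy.optimize import minimize

def learn_hyperparameters(d0, phi, solve_opt1, holdout_logloss, hyper_gradient):
    """Gradient-based outer loop for OPT2'.

    Placeholder callables (assumed, not from the paper):
      solve_opt1(C_diag)    -> w*, the OPT1 solution for C = diag(C_diag)
      holdout_logloss(w)    -> scalar holdout logloss
      hyper_gradient(w, d)  -> gradient of the holdout logloss w.r.t. d
    `phi` is an integer array mapping each of the n parameters to one of
    the k regularization groups."""
    def objective(d):
        C_diag = np.exp(d)[phi]       # diagonal of C(d) = diag(exp(d)), expanded by phi
        w_star = solve_opt1(C_diag)   # inner optimization (OPT1)
        return holdout_logloss(w_star), hyper_gradient(w_star, d)

    # Only a handful of outer iterations are usually needed (~5 in the paper).
    res = minimize(objective, d0, jac=True, method="L-BFGS-B",
                   options={"maxiter": 5})
    return res.x
```

Note that every outer objective evaluation is itself a full training run, so keeping the number of outer iterations small matters in practice.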
4
The hyperparameter gradient
Note that the optimization objective ℓ_H(w*) is a function of w*. In turn, w* is a function of the hyperparameters d, as implicitly defined by the gradient stationarity condition, Cw* + ∇_w ℓ_T(w*) = 0. To compute the hyperparameter gradient, we will use both of these facts.
4.1 Deriving the hyperparameter gradient
First, we apply the chain rule to the objective function of OPT2′ to obtain

∇_d ℓ_H(w*) = J_dᵀ ∇_w ℓ_H(w*)   (1)

where J_d is the n × k Jacobian matrix whose (i, j)th entry is ∂w_i*/∂d_j. The term ∇_w ℓ_H(w*) is
simply the gradient of the holdout logloss evaluated at w*. For decomposable models, this may
be computed exactly via dynamic programming (e.g., the forward/backward algorithm for chainstructured models or the inside/outside algorithm for grammar-based models).
Next, we show how to compute the Jacobian matrix Jd . Recall that at the optimum of the smooth
unconstrained optimization problem OPT1, the partial derivative of the objective with respect to any
parameter must vanish. In particular, the partial derivative of ½ wᵀCw + ℓ_T(w) with respect to w_i
vanishes when w = w*, so

0 = C_iᵀ w* + ∂ℓ_T(w*)/∂w_i,   (2)

where C_iᵀ denotes the ith row of the C matrix. Since (2) uniquely defines w* (as OPT1 is a
strictly convex optimization problem), we can use implicit differentiation to obtain the needed partial
derivatives. Specifically, we can differentiate both sides of (2) with respect to d_j to obtain

0 = Σ_{p=1}^n [ (∂C_ip/∂d_j) w_p* + C_ip (∂w_p*/∂d_j) ] + Σ_{p=1}^n (∂²ℓ_T(w*)/∂w_i ∂w_p) (∂w_p*/∂d_j)   (3)
  = I{φ(i)=j} w_i* exp(d_j) + Σ_{p=1}^n [ C_ip + ∂²ℓ_T(w*)/∂w_i ∂w_p ] (∂w_p*/∂d_j).   (4)
Stacking (4) for all i ∈ {1, . . . , n} and j ∈ {1, . . . , k}, we obtain the equivalent matrix equation,

0 = B + (C + ∇²_w ℓ_T(w*)) J_d   (5)

where B is the n × k matrix whose (i, j)th element is I{φ(i)=j} w_i* exp(d_j), and ∇²_w ℓ_T(w*) is the
Hessian of the training logloss evaluated at w*. Finally, solving these equations for J_d, we obtain

J_d = −(C + ∇²_w ℓ_T(w*))^{−1} B.   (6)
4.2 Computing the hyperparameter gradient efficiently
In principle, one could simply use (6) to obtain the Jacobian matrix J_d directly. However, computing
the n × n matrix (C + ∇²_w ℓ_T(w*))^{−1} is difficult. Computing the Hessian matrix ∇²_w ℓ_T(w*) in
a typical CLLM requires approximately n times the cost of a single logloss gradient evaluation.
Once the Hessian has been computed, typical matrix inversion routines take O(n³) time. Even
more problematic, the Θ(n²) memory usage for storing the Hessian is prohibitive, as typical log-linear models (e.g., in NLP) may have thousands or even millions of features. To deal with these
Algorithm 1: Gradient computation for hyperparameter selection.
Input: training set T = {(x^(i), y^(i))}_{i=1}^m, holdout set H̃ = {(x̃^(i), ỹ^(i))}_{i=1}^m̃, current hyperparameters d ∈ R^k
Output: hyperparameter gradient ∇_d ℓ_H(w*)
1. Compute solution w* to OPT1 using regularization matrix C = diag(exp(d̄)).
2. Form the matrix B ∈ R^{n×k} such that (B)_{ij} = I{φ(i)=j} w_i* exp(d_j).
3. Use the conjugate gradient algorithm to solve the linear system, (C + ∇²_w ℓ_T(w*)) x = ∇_w ℓ_H(w*).
4. Return −Bᵀx.
Figure 1: Pseudocode for gradient computation
problems, we first explain why (C + ∇²_w ℓ_T(w*)) v for any arbitrary vector v ∈ R^n can be computed
in O(n) time, even though forming (C + ∇²_w ℓ_T(w*))^{−1} is expensive. Using this result, we then
describe an efficient procedure for computing the holdout hyperparameter gradient which avoids the
expensive Hessian computation and inversion steps of the direct method.
First, since C is diagonal, the product of C with any arbitrary vector v is trivially computable in
O(n) time. Second, although direct computation of the Hessian is inefficient in a generic log-linear
model, computing the product of the Hessian with v can be done quickly, using any of the following
techniques, listed in order of increasing implementation effort (and numerical precision):
1. Finite differencing. Use the following numerical approximation:
∇²_w ℓ_T(w*) · v = lim_{r→0} [∇_w ℓ_T(w* + rv) − ∇_w ℓ_T(w*)] / r.   (7)
2. Complex step derivative [24]. Use the following identity from complex analysis:
∇²_w ℓ_T(w*) · v = lim_{r→0} Im{∇_w ℓ_T(w* + i · rv)} / r,   (8)
where Im{·} denotes the imaginary part of its complex argument (in this case, a vector).
Because there is no subtraction in the numerator of the right-hand expression, the complex-step derivative does not suffer from the numerical problems of the finite-differencing
method that result from cancellation. As a consequence, much smaller step sizes can be
used, allowing for greater accuracy.
3. Analytical computation. Given an existing O(n) algorithm for computing gradients analytically, define the differential operator
R_v{f(w)} = lim_{r→0} [f(w + rv) − f(w)] / r = ∂/∂r f(w + rv) |_{r=0},   (9)

for which one can verify that R_v{∇_w ℓ_T(w*)} = ∇²_w ℓ_T(w*) · v. By applying standard rules for differential operators, R_v{∇_w ℓ_T(w*)} can be computed recursively using
a modified version of the original gradient computation routine; see [31] for details.
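The first two identities above are easy to check on a toy problem. The following is our own hedged demonstration, using f(w) = Σ exp(w_i) so that the gradient is exp(w) and the true Hessian is diag(exp(w)); nothing here comes from the paper's implementation.

```python
import numpy as np

# Toy smooth function: grad f = exp(w), Hess f = diag(exp(w))
grad = lambda w: np.exp(w)

def hvp_fd(grad, w, v, r=1e-6):
    # Hessian-vector product via finite differencing, cf. eq. (7)
    return (grad(w + r * v) - grad(w)) / r

def hvp_cs(grad, w, v, r=1e-20):
    # Hessian-vector product via the complex-step identity, cf. eq. (8);
    # requires a gradient routine that accepts complex arguments
    return np.imag(grad(w + 1j * r * v)) / r

w = np.array([0.1, -0.3, 0.7])
v = np.array([1.0, 2.0, -1.0])
exact = np.exp(w) * v                                     # diag(exp(w)) @ v
print(np.allclose(hvp_fd(grad, w, v), exact, atol=1e-4))  # True (limited accuracy)
print(np.allclose(hvp_cs(grad, w, v), exact))             # True (machine precision)
```

The absence of cancellation in the complex-step version is what permits the far smaller step size, exactly as described above.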
Hessian-vector products for graphical models were previously used in the context of step-size adaptation for stochastic gradient descent [36]. In our experiments, we found that the simplest method,
finite-differencing, provided sufficient accuracy for our application.
Given the above procedure for computing matrix-vector products, we can now use the conjugate
gradient (CG) method to solve the matrix equation (5) to obtain Jd . Unlike direct methods for
solving linear systems Ax = b, CG is an iterative method which relies on the matrix A only
through matrix-vector products Av. In practice, few steps of the CG algorithm are generally needed
to find an approximate solution of a linear system with acceptable accuracy. Using CG in this
way amounts to solving k linear systems, one for each column of the Jd matrix. Unlike the direct
method of forming the (C + ∇²_w ℓ_T(w*)) matrix and its inverse, solving the linear systems avoids
the expensive Θ(n²) cost of Hessian computation and matrix inversion.
Nevertheless, even this approach for computing the Jacobian matrices still requires the solution
of multiple linear systems, which scales poorly when the number of hyperparameters k is large.
[Figure 2 graphics: (a) state diagram of the HMM, with hidden chain y_1, y_2, . . . , y_L, "observed features" x^j for j ∈ {1, . . . , R} and "noise features" x^j for j ∈ {R + 1, . . . , 40}; (b) plot of proportion of incorrect labels vs. number of relevant features R; (c) plot of proportion of incorrect labels vs. training set size M; both plots compare the grid, single, separate and grouped schemes.]
Figure 2: HMM simulation experiments. (a) State diagram of the HMM used in the simulations. (b)
Testing set performance when varying R, using M = 10. (c) Testing set performance when varying
M , using R = 5. In both (b) and (c), each point represents an average over 100 independent runs of
HMM training/holdout/testing set generation and CRF training and hyperparameter optimization.
However, we can do much better by reorganizing the computations in such a way that the Jacobian
matrix Jd is never explicitly required. In particular, substituting (6) into (1),
∇_d ℓ_H(w*) = −Bᵀ (C + ∇²_w ℓ_T(w*))^{−1} ∇_w ℓ_H(w*)   (10)

we observe that it suffices to solve the single linear system,

(C + ∇²_w ℓ_T(w*)) x = ∇_w ℓ_H(w*)   (11)

and then form ∇_d ℓ_H(w*) = −Bᵀx. By organizing the computations this way, the number of least
squares problems that must be solved is substantially reduced from k to only one. A similar trick
was previously used for hyperparameter adaptation in SVMs [16] and kernel logistic regression [33].
Figure 1 shows a summary of our algorithm for hyperparameter gradient computation.1
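For concreteness, Algorithm 1 can be realized along the following lines. This is a sketch under our own naming assumptions: the OPT1 solver and the two logloss-gradient routines are supplied externally, Hessian-vector products use the finite-differencing identity (7), and the single system (11) is solved with scipy's conjugate gradient routine.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def hyperparameter_gradient(d, phi, solve_opt1, train_grad, holdout_grad, r=1e-6):
    """Sketch of Algorithm 1 with finite-difference Hessian-vector products.

    Placeholder callables (assumed, not from the paper's code):
      solve_opt1(C_diag) -> w*, the OPT1 solution for C = diag(C_diag)
      train_grad(w)      -> gradient of the training logloss at w
      holdout_grad(w)    -> gradient of the holdout logloss at w"""
    C_diag = np.exp(d)[phi]            # step 1: C = diag(exp(d)) expanded by phi
    w_star = solve_opt1(C_diag)
    n, k = w_star.size, d.size

    g0 = train_grad(w_star)
    def matvec(v):                     # (C + Hessian) v, using eq. (7)
        return C_diag * v + (train_grad(w_star + r * v) - g0) / r

    # step 3: solve the single linear system (11) by conjugate gradient
    x, _ = cg(LinearOperator((n, n), matvec=matvec), holdout_grad(w_star),
              maxiter=100)

    # steps 2 and 4: B_ij = I{phi(i)=j} w*_i exp(d_j); accumulate -B^T x per group
    Bx = w_star * np.exp(d)[phi] * x
    return -np.bincount(phi, weights=Bx, minlength=k)
```

The `bincount` call sums the elementwise products within each regularization group, which is exactly the matrix-free form of −Bᵀx.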
5
Experiments
To test the effectiveness of our hyperparameter learning algorithm, we applied it to two tasks: a simulated sequence labeling task involving noisy features, and a real-world application of conditional
log-linear models to the biological problem of RNA secondary structure prediction.
Sequence labeling simulation. For our simulation test, we constructed a simple linear-chain
hidden Markov model (HMM) with binary-valued hidden nodes, y_i ∈ {0, 1}.² We associated 40
binary-valued features x_i^j, j ∈ {1, . . . , 40}, with each hidden state y_i, including R "relevant" observed features whose values were chosen based on y_i, and (40 − R) "irrelevant" noise features
whose values were chosen to be either 0 or 1 with equal probability, independent of y_i.³ Figure 2a
shows the graphical model representing the HMM. For each run, we used the HMM to simulate
training, holdout, and testing sets of M , 10, and 1000 sequences, respectively, each of length 10.
Next, we constructed a CRF based on an HMM model similar to that shown in Figure 2a, in
which potentials were included for the initial node y_1, between each y_i and y_{i+1}, and between
y_i and each x_i^j (including both the observed features and the noise features). We then performed
gradient-based hyperparameter learning using three different parameter-tying schemes: (a) all hyperparameters constrained to be equal, (b) separate hyperparameter groups for each parameter of the
model, and (c) transitions, observed features, and noise features each grouped together. Figure 2b
shows the performance of the CRF for each of the three parameter-tying gradient-based optimization
schemes, as well as the performance of scheme (a) when using the standard grid-search strategy of
trying regularization matrices CI for C ∈ {. . . , 2^{−2}, 2^{−1}, 2^0, 2^1, 2^2, . . .}.
As seen in Figures 2b and 2c, the gradient-based procedure performed either as well as or better than a grid search for single hyperparameter models. Using either a single hyperparameter or
all separate hyperparameters generally gave similar results, with a slight tendency for the separate
¹In practice, roughly 50–100 iterations of CG were sufficient to obtain hyperparameter gradients, meaning that the cost of running Algorithm 1 was approximately the same as the cost of solving OPT1 for a single fixed setting of the hyperparameters. Roughly 3–5 line searches were sufficient to identify good hyperparameter settings; assuming that each line search takes 2–4 times the cost of solving OPT1, the overall hyperparameter learning procedure takes approximately 20 times the cost of solving OPT1 once.
²For our HMM, we set initial state probabilities to 0.5 each, and used self-transition probabilities of 0.6.
³Specifically, we drew each x_i^j independently according to P(x_i^j = v | y_i = v) = 0.6, v ∈ {0, 1}.
[Figure 3 graphics: (a) an RNA sequence (5′-uccguagaaggc-3′) shown alongside its secondary structure; (b) table of learned regularization-group weights exp(d_i) on folds A and B for 15 feature groups: hairpin loop lengths, helix closing base pairs, symmetric internal loop lengths, external loop lengths, bulge loop lengths, base pairings, internal loop asymmetry, explicit internal loop sizes, terminal mismatch interactions, single base pair stacking interactions, 1 × 1 internal loop nucleotides, single base bulge nucleotides, internal loop lengths, multi-branch loop lengths, helix stacking interactions; (c) sensitivity vs. specificity plot comparing CONTRAfold (our algorithm) with single (AUC=0.6169, logloss=5916), separate (AUC=0.6383, logloss=5763) and grouped (AUC=0.6406, logloss=5531) hyperparameters against Mfold, ViennaRNA, PKNOTS, ILM and Pfold.]
Figure 3: RNA secondary structure prediction. (a) An illustration of the secondary structure prediction task. (b) Grouped hyperparameters learned using our algorithm for each of the two folds. (c)
Performance comparison with state-of-the-art methods when using either a single hyperparameter
(the ?original? CONTRAfold), separate hyperparameters, or grouped hyperparameters.
hyperparameter model to overfit. Enforcing regularization groups, however, gave consistently lower
error rates, achieving an absolute reduction in generalization error over the next-best model of 6.7%,
corresponding to a relative reduction of 16.2%.
RNA secondary structure prediction. We also applied our framework to the problem of RNA
secondary structure prediction. Ribonucleic acid (RNA) molecules are long nucleic acid polymers
present in the cells of all living organisms. For many types of RNA, three-dimensional (or tertiary)
structure plays an important role in determining the RNA?s function. Here, we focus on the task
of predicting RNA secondary structure, i.e., the pattern of nucleotide base pairings which form the
two-dimensional scaffold upon which RNA tertiary structures assemble (see Figure 3a).
As a starting point, we used CONTRAfold [7], a current state-of-the-art secondary structure
prediction program based on CLLMs. In brief, the CONTRAfold program models RNA secondary
structures using a variant of stochastic context-free grammars (SCFGs) which incorporates features
chosen to closely match the energetic terms found in standard physics-based models of RNA structure. These features model the various types of loops that occur in RNAs (e.g., hairpin loops, bulge
loops, interior loops, etc.). To control overfitting, CONTRAfold uses flat L2 regularization. Here,
we modified the existing implementation to perform an ?outer? optimization loop based on our algorithm, and chose regularization groups either by (a) enforcing a single hyperparameter group, (b)
using separate groups for each parameter, or (c) grouping according to the type of each feature (e.g.,
all features for describing hairpin loop lengths were placed in a single regularization group).
For testing, we collected 151 RNA sequences from the Rfam database [13] for which
experimentally-determined secondary structures were already known. We divided this dataset into
two folds (denoted A and B) and performed two-fold cross-validation. Despite the small size of
the training set, the hyperparameters learned on each fold were nonetheless qualitatively similar,
indicating the robustness of the procedure (see Figure 3b). As expected, features with small regularization hyperparameters correspond to properties of RNAs which are known to contribute strongly
to the energetics of RNA secondary structure, whereas many of the features with larger regularization hyperparameters indicate structural properties whose presence/absence are either less correlated
with RNA secondary structure or sufficiently noisy that their parameters are difficult to determine
reliably from the training data.
We then compared the cross-validated performance of our algorithm with state-of-the-art methods
(see Figure 3c).4 Using separate or grouped hyperparameters both gave increased sensitivity and
increased specificity compared to the original model, which was learned using a single regularization hyperparameter. Overall, the testing logloss (summed over the two folds) decreased by roughly
6.5% when using grouped hyperparameters and 2.6% when using multiple separate hyperparameters, while the estimated testing ROC area increased by roughly 3.8% and 3.4%, respectively.
6
Discussion and related work
In this work, we presented a gradient-based approach for hyperparameter learning based on minimizing logloss on a holdout set. While the use of cross-validation loss as a proxy for generalization
error is fairly natural, in many other supervised learning methods besides log-linear models, other
objective functions have been proposed for hyperparameter optimization. In SVMs, approaches
based on optimizing generalization bounds [4], such as the radius/margin-bound [15] or maximal
discrepancy criterion [2] have been proposed. Comparable generalization bounds are not generally
known for CRFs; even in SVMs, however, generalization bound-based methods empirically do not
outperform simpler methods based on optimizing five-fold cross-validation error [8].
A different method for dealing with hyperparameters, common in neural network modeling, is
the Bayesian approach of treating hyperparameters themselves as parameters in the model to be estimated. In an ideal Bayesian scheme, one does not perform hyperparameter or parameter inference,
but rather integrates over all possible hyperparameters and parameters in order to obtain a posterior
distribution over predicted outputs given the training data. This integration can be performed using
a hybrid Monte Carlo strategy [27, 38]. For the types of large-scale log-linear models we consider in
this paper, however, the computational expense of sampling-based strategies can be extremely high
due to slow convergence of MCMC techniques [26].
Empirical Bayesian (i.e., ML-II) strategies, such as Automatic Relevance Determination
(ARD) [22], take the intermediate approach of integrating over parameters to obtain the marginal
likelihood (known as the log evidence), which is then optimized with respect to the hyperparameters. Computing marginal likelihoods, however, can be quite costly, especially for log-linear models.
One method for doing this involves approximating the parameter posterior distribution as a Gaussian
centered at the posterior mode [22, 37]. In this strategy, however, the "Occam factor" used for hyperparameter optimization still requires a Hessian computation, which does not scale well for log-linear
models. An alternate approach based on using a modification of expectation propagation (EP) [25]
was applied in the context of Bayesian CRFs [32] and later extended to graph-based semi-supervised
learning [14]. As described, however, inference in these models relies on non-traditional "probit-style" potentials for efficiency reasons, and known algorithms for inference in Bayesian CRFs are
limited to graphical models with fixed structure.
In contrast, our approach works broadly for a variety of log-linear models, including the
grammar-based models common in computational biology and natural language processing. Furthermore, our algorithm is simple and efficient, both conceptually and in practice: one iteratively
optimizes the parameters of a log-linear model using a fixed setting of the hyperparameters, and then
one changes the hyperparameters based on the holdout logloss gradient. The gradient computation
relies primarily on a simple conjugate gradient solver for linear systems, coupled with the ability
to compute Hessian-vector products (straightforward in any modern programming language that allows for operation overloading). As we demonstrated in the context of RNA secondary structure
prediction, gradient-based hyperparameter learning is a practical and effective method for tuning
hyperparameters when applied to large-scale log-linear models.
Finally we note that for neural networks, [9] and [5] proposed techniques for simultaneous optimization of hyperparameters and parameters; these results suggest that similar procedures for faster
hyperparameter learning that do not require a doubly-nested optimization may be possible.
References
[1] L. Andersen, J. Larsen, L. Hansen, and M. Hintz-Madsen. Adaptive regularization of neural classifiers. In NNSP, 1997.
[2] D. Anguita, S. Ridella, F. Rivieccio, and R. Zunino. Hyperparameter design criteria for support vector classifiers. Neurocomputing, 55:109–134, 2003.
⁴Following [7], we used the maximum expected accuracy algorithm for decoding, which returns a set of candidate parses reflecting different trade-offs between sensitivity (proportion of true base-pairs called) and specificity (proportion of called base-pairs which are correct).
[3] Y. Bengio. Gradient-based optimization of hyperparameters. Neural Computation, 12:1889–1900, 2000.
[4] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters for support vector machines. Machine Learning, 46(1–3):131–159, 2002.
[5] D. Chen and M. Hagan. Optimal use of regularization and cross-validation in neural network modeling. In IJCNN, 1999.
[6] C. B. Do, S. S. Gross, and S. Batzoglou. CONTRAlign: discriminative training for protein sequence alignment. In RECOMB, pages 160–174, 2006.
[7] C. B. Do, D. A. Woods, and S. Batzoglou. CONTRAfold: RNA secondary structure prediction without physics-based models. Bioinformatics, 22(14):e90–e98, 2006.
[8] K. Duan, S. S. Keerthi, and A. N. Poo. Evaluation of simple performance measures for tuning SVM hyperparameters. Neurocomputing, 51(4):41–59, 2003.
[9] R. Eigenmann and J. A. Nossek. Gradient based adaptive regularization. In NNSP, pages 87–94, 1999.
[10] T. Glasmachers and C. Igel. Gradient-based adaptation of general Gaussian kernels. Neural Comp., 17(10):2099–2105, 2005.
[11] A. Globerson, T. Y. Koo, X. Carreras, and M. Collins. Exponentiated gradient algorithms for log-linear structured prediction. In ICML, pages 305–312, 2007.
[12] C. Goutte and J. Larsen. Adaptive regularization of neural networks using conjugate gradient. In ICASSP, 1998.
[13] S. Griffiths-Jones, S. Moxon, M. Marshall, A. Khanna, S. R. Eddy, and A. Bateman. Rfam: annotating non-coding RNAs in complete genomes. Nucleic Acids Res, 33:D121–D124, 2005.
[14] A. Kapoor, Y. Qi, H. Ahn, and R. W. Picard. Hyperparameter and kernel learning for graph based semi-supervised classification. In NIPS, pages 627–634, 2006.
[15] S. S. Keerthi. Efficient tuning of SVM hyperparameters using radius/margin bound and iterative algorithms. IEEE Transactions on Neural Networks, 13(5):1225–1229, 2002.
[16] S. S. Keerthi, V. Sindhwani, and O. Chapelle. An efficient method for gradient-based adaptation of hyperparameters in SVM models. In NIPS, 2007.
[17] K. Kobayashi, D. Kitakoshi, and R. Nakano. Yet faster method to optimize SVR hyperparameters based on minimizing cross-validation error. In IJCNN, volume 2, pages 871–876, 2005.
[18] K. Kobayashi and R. Nakano. Faster optimization of SVR hyperparameters based on minimizing cross-validation error. In IEEE Conference on Cybernetics and Intelligent Systems, 2004.
[19] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In ICML 18, pages 282–289, 2001.
[20] J. Larsen, L. K. Hansen, C. Svarer, and M. Ohlsson. Design and regularization of neural networks: the optimal use of a validation set. In NNSP, 1996.
[21] J. Larsen, C. Svarer, L. N. Andersen, and L. K. Hansen. Adaptive regularization in neural network modeling. In Neural Networks: Tricks of the Trade, pages 113–132, 1996.
[22] D. J. C. MacKay. Bayesian interpolation. Neural Computation, 4(3):415–447, 1992.
[23] D. J. C. MacKay and R. Takeuchi. Interpolation models with multiple hyperparameters. Statistics and Computing, 8:15–23, 1998.
[24] J. R. R. A. Martins, P. Sturdza, and J. J. Alonso. The complex-step derivative approximation. ACM Trans. Math. Softw., 29(3):245–262, 2003.
[25] T. P. Minka. Expectation propagation for approximate Bayesian inference. In UAI, volume 17, pages 362–369, 2001.
[26] I. Murray and Z. Ghahramani. Bayesian learning in undirected graphical models: approximate MCMC algorithms. In UAI, pages 392–399, 2004.
[27] R. M. Neal. Bayesian Learning for Neural Networks. Springer, 1996.
[28] A. Y. Ng. Preventing overfitting of cross-validation data. In ICML, pages 245–253, 1997.
[29] A. Y. Ng. Feature selection, L1 vs. L2 regularization, and rotational invariance. In ICML, 2004.
[30] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 1999.
[31] B. A. Pearlmutter. Fast exact multiplication by the Hessian. Neural Comp., 6(1):147–160, 1994.
[32] Y. Qi, M. Szummer, and T. P. Minka. Bayesian conditional random fields. In AISTATS, 2005.
[33] M. Seeger. Cross-validation optimization for large scale hierarchical classification kernel methods. In NIPS, 2007.
[34] F. Sha and F. Pereira. Shallow parsing with conditional random fields. In NAACL, pages 134–141, 2003.
[35] S. Sundararajan and S. S. Keerthi. Predictive approaches for choosing hyperparameters in Gaussian processes. Neural Comp., 13(5):1103–1118, 2001.
[36] S. V. N. Vishwanathan, N. N. Schraudolph, M. W. Schmidt, and K. P. Murphy. Accelerated training of conditional random fields with stochastic gradient methods. In ICML, pages 969–976, 2006.
[37] M. Wellings and S. Parise. Bayesian random fields: the Bethe-Laplace approximation. In ICML, 2006.
[38] C. K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342–1351, 1998.
[39] X. Zhang and W. S. Lee. Hyperparameter learning for graph based semi-supervised learning algorithms. In NIPS, 2007.
A Probabilistic Model for Generating
Realistic Lip Movements from Speech
Gwenn Englebienne
School of Computer Science
University of Manchester
ge@cs.man.ac.uk
Tim F. Cootes
Imaging Science and Biomedical Engineering
University of Manchester
Tim.Cootes@manchester.ac.uk
Magnus Rattray
School of Computer Science
University of Manchester
magnus.rattray@manchester.ac.uk
Abstract
The present work aims to model the correspondence between facial motion
and speech. The face and sound are modelled separately, with phonemes
being the link between both. We propose a sequential model and evaluate
its suitability for the generation of the facial animation from a sequence of
phonemes, which we obtain from speech. We evaluate the results both by
computing the error between generated sequences and real video, as well as
with a rigorous double-blind test with human subjects. Experiments show
that our model compares favourably to other existing methods and that
the sequences generated are comparable to real video sequences.
1
Introduction
Generative systems that model the relationship between face and speech offer a wide range
of exciting prospects. Models combining speech and face information have been shown to
improve automatic speech recognition [4]. Conversely, generating video-realistic animated
faces from speech has immediate applications to the games and movie industries. There is
a strong correlation between lip movements and speech [7,10], and there have been multiple
attempts at generating an animated face to match some given speech realistically [2,3,9,13].
Studies have indicated that speech might be informative not only of lip movement but also
of movement in the upper regions of the face [3]. Incorporating speech therefore seems
crucial to the generation of true-to-life animated faces.
Our goal is to build a generative probabilistic model, capable of generating realistic facial
animations in real time, given speech. We first use an Active Appearance Model (AAM [6])
to extract features from the video frames. The AAM itself is generative and allows us to
produce video-realistic frames from the features. We then use a Hidden Markov Model
(HMM [12]) to align phoneme labels to the audio stream of video sequences, and use this
information to label the corresponding video frames. We propose a model which, when
trained on these labelled video frames, is capable of generating new, realistic video from
unseen phoneme sequences. Our model is a modification of Switching Linear Dynamical
Systems (SLDS [1,15]) and we show that it performs better at generation than other existing
models. We compare its performance to two previously proposed models by comparing the
sequences they generate to a golden standard, features from real video sequences, and by
asking volunteers to select the "real" video in a forced-choice test.
The results of human evaluation of our generated sequences are extremely encouraging. Our
system performs well with any speech, and since it can easily handle real-time generation
of the facial animation, it brings a realistic-looking, talking avatar within reach.
2
The Data
We used sequences from the freely available on-line news broadcast Democracy Now! The
show is broadcast every weekday in a high quality MP4 format, and as such constitutes
a constant source of new data. The text transcripts are available on-line, thus greatly
facilitating the training of a speech recognition system. We manually extracted short video
sequences of the news presenter talking (removing any inserts, telephone interviews, etc.),
cutting at "natural" positions in the stream, viz. during pauses for breath and silences. The
sequences are all of the same person, albeit on different days within a period of slightly more
than a month. There was no reason to restrict the data to a single person, other than the
difficulty to obtain sequences of similar quality from other sources.
All usable sequences were extracted from the data, that is, those where the face of the speaker
was visible and the sound was not corrupted by external sound sources. The sequences do
include hesitations, corrections, incomplete words, noticeable fatigue, breath, swallowing,
etc. The speaker visibly makes an effort to speak clearly, but obviously makes no effort to
reduce head motion or facial expression, and the data is hence probably as representative
of the problem as can be hoped for.
In total, sequences totalling 1 hour and 7 minutes of video were extracted and annotated.1
The data was split into independent training and test sets for a 10-fold cross validation,
based on the number of sequences in each set (rather than the total amount of data). This
resulted in training sets of an average of 60 minutes of data, and test sets of approximately
7 min. All models evaluated here were trained and tested on the same data sets.
Sound features and labelling. The sequences are
split into an audio and a video stream, which are
treated separately (see Figure 1). From the sound
stream, we extract Mel Frequency Cepstrum Coefficients (MFCC) at a rate of 100Hz, using tools from
the HMM Tool Kit [16], resulting in 13-dimensional
feature vectors. We train a HMM on these MFCC
features, and use it to align phonetic labels to the
sound. This is an easier task than unrestricted speech
recognition, and is done satisfactorily by a simple
HMM with monophones as hidden states, where mixtures of Gaussian distributions model the emission
densities. The sound samples are labelled with the Viterbi path through the HMM that was
"unrolled" with the phonetic transcription of the text.
[Figure 1: Combining sound and face]
The labels obtained from the sound stream are then used to label the corresponding video
frames. The difference in rate (the video is processed at 29.97 frames per second while
MFCC coefficients are computed at 100 Hz) is handled by simple voting: each video frame
is labelled with the phoneme that labels most of the corresponding sound frames.
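A minimal sketch of this voting step follows; the function and variable names are our own illustration, not the original implementation.

```python
import numpy as np

def label_video_frames(mfcc_labels, video_fps=29.97, mfcc_rate=100.0):
    """Label each video frame with the phoneme covering most of its
    corresponding 100 Hz MFCC frames.  `mfcc_labels` is the per-MFCC-frame
    phoneme sequence obtained from the Viterbi path."""
    n_video = int(len(mfcc_labels) * video_fps / mfcc_rate)
    labels = []
    for t in range(n_video):
        lo = int(round(t * mfcc_rate / video_fps))
        hi = int(round((t + 1) * mfcc_rate / video_fps))
        window = np.asarray(mfcc_labels[lo:hi])
        vals, counts = np.unique(window, return_counts=True)
        labels.append(vals[np.argmax(counts)])   # most frequent phoneme wins
    return labels
```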
Face features. The feature extraction for the video was done using an Active Appearance
Model (AAM [6]). The AAM represents both the shape and the texture of an object in an
image. The shape of the lower part of the face is represented by the location of 23 points on
key features on the eyes, mouth and jaw-line (see Figure 2). Given the position of the points
in a set of training images, we align them to a common co-ordinate frame and apply PCA to
learn a low-dimensional linear model capturing the shape change [5]. The intensities across
the region in each example are warped to the mean shape using a simple triangulation of
the region (Fig 2), and PCA applied to the vectors of intensities sampled from each image.
This leads to a low-dimensional linear model of the intensities in the mean frame. Efficient
algorithms exist for matching such models to new images [6]. By combining shape and
intensity model together, a wide range of convincing synthetic faces can be generated [6]. In
this case a 32 parameter model proves sufficient. This is closely related to eigenfaces [14] but
gives far better results as shape and texture are decoupled [8]. Since the AAM parameters
¹The data is publicly available at http://www.cs.manchester.ac.uk/ai/public/demnow.
Figure 2: The face was modelled with an AAM. A set of training images is manually labelled as
shown in the two leftmost images. A statistical model of the shape is then combined with a model
of the texture within the triangles between feature points. Applying the model to a new image
results in a vector of coefficients, which can be used to reconstruct the original image.
are a low-dimensional linear projection of the original object, projecting those parameters
back to the high-dimensional space allows us to reconstruct the modelled part of the original
image.
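The linear core of such a model is easy to make concrete. The following is our own minimal sketch of the PCA part only, assuming the landmarks have already been aligned to a common co-ordinate frame; the texture model and the AAM search are omitted, and the array names are our assumptions.

```python
import numpy as np

def fit_shape_model(shapes, n_components=10):
    """PCA shape model: `shapes` is an (N, 2*23) array of aligned
    landmark coordinates, one row per training image."""
    mean = shapes.mean(axis=0)
    _, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    modes = Vt[:n_components]        # principal modes of shape variation
    return mean, modes

def encode(shape, mean, modes):
    return modes @ (shape - mean)    # low-dimensional shape parameters

def decode(params, mean, modes):
    return mean + modes.T @ params   # reconstruct landmark positions
```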
3
Modelling the dynamics of the face
We model the face using only phoneme labels to capture the shared information between
speech and face. We use 41 distinct phoneme labels, two of which are reserved for breath
and silence, the rest being the generally accepted phonemes in the English language. Most
earlier techniques that use discrete labels to generate synthetic video sequences use some
form of smooth interpolation between key frames [2, 9]. This requires finding the correct
key frames, and lacks the flexibility of a probabilistic formulation. Brand uses a HMM
where Gaussian distributions are fitted to a concatenation of the data features and "delta"
features [3]. Since the distribution is fitted to both the features and the difference between
features, the resulting "distribution" cannot be sampled, as it would result in a nonsensical
mismatch between features and delta features. It is therefore not genuinely generative, and
obtaining new sequences from the model requires solving an optimisation problem.
Under Brand's approach, new sequences are obtained by finding the most likely sequence of
observations for a set of labels. This is done by setting the first derivative of the likelihood
with respect to the observations to zero, resulting in a set of linear equations involving, at
each time t, the observation y_t^s and the previous observation y_{t−1}^s. Such a set of linear
equations can be solved relatively efficiently thanks to its block-band-diagonal structure.
This requires the storage of O(d²T) elements and O(d³T) time to solve, where d is twice
the dimensionality of the face features and T is the number of frames in a sequence. This
becomes non-trivial for sequences exceeding a few tens of seconds. More important, however,
is that this cannot be done in real time, as the last label of the sequence must be known
before the first observation can be computed.
In this work, we consider more standard probabilistic models of sequential data, which are
genuinely generative. These models are shown to outperform Brand?s approach for the
generation of realistic sequences.
Switching Linear Dynamical Systems. Before introducing the SLDS, we introduce
some notational conventions. We have a set of S video sequences, which we index with
s ∈ [1 . . . S]. The feature vector of the frame at time t in video sequence s is denoted
y_t^s ∈ R^d, and the complete set of feature vectors for that sequence is denoted {y}_1^{T_s},
where T_s is the length of the sequence. Continuous hidden variables are denoted x and
discrete state labels ℓ, where ℓ ∈ [1 . . . L].
In an SLDS, the sequence of observations {y}_1^{T_s} is modelled as being a noisy version of a
hidden sequence {x}_1^{T_s} which depends on a sequence of discrete labels {ℓ}_1^{T_s}. Each state ℓ is
associated with a transition matrix A_ℓ and with a distribution for the output noise v and the
process noise w, such that y_t^s = B_{ℓ_t^s} x_t^s + v_t, x_1^s ∼ N(μ_{ℓ_1^s}, Σ_{ℓ_1^s}) and x_t^s = A_{ℓ_t^s} x_{t−1}^s + ν_{ℓ_t^s} + w_t
for 2 ≤ t ≤ T_s. Both the output noise v_t and the process noise w_t are normally distributed
with zero mean; v_t ∼ N(0, R_{ℓ_t^s}) and w_t ∼ N(0, Q_{ℓ_t^s}). The states in our application are
Figure 3: Graphical representation of the different models: figure (a) depicts the dependencies in
an SLDS when the labels are known and (b) represents our proposed DPDS, where we assume the
process is noiseless. The circles are discrete and the squares are multivariate continuous quantities.
The shaded elements are observed and the random variables in the dashed box are conditioned on
the quantities outside of it.
the phonemes, which are obtained from the sound. Notice that in general, when the state
labels are not known, computing the likelihood in an SLDS is intractable as it requires
the enumeration of all possible state sequences, which is exponential in T [1]. In our case,
however, the state label ℓ_t^s of each frame is known from the sound, and the likelihood can
be computed with the same algorithm as for a standard Linear Dynamical System (LDS),
which is linear in T. Parameter optimisation can therefore be carried out efficiently with
a standard EM algorithm. Also note that neither SLDS nor LDS is commonly described
with the explicit state bias ν_{ℓ_t^s}, as this can easily be emulated by augmenting each latent
vector x_t^s with a 1 and incorporating ν_{ℓ_t^s} into A_{ℓ_t^s}. However, doing so prevents us from
using a diagonal matrix for A_{ℓ_t^s}, and experience has shown that the state mean is crucial
to good prediction, while the lack of sufficient data or, as is the case with our data, the
a priori known approximate independence of the data dimensions may make the reduction of
the complexity of A_{ℓ_t^s}, Q_{ℓ_t^s} and R_{ℓ_t^s} warranted.
In this form, the model is over-parametrised; it can be simplified without any loss of generality either by fixing Q_{ℓ_t^s} to the identity matrix I or, if there is no reason to use a different
dimensionality for x and y, by setting B_{ℓ_t^s} = I. We did the latter, as this makes the
resulting {x}_1^T easier to interpret and compare across the different models we evaluate here.
We trained an SLDS by maximum likelihood and used the model to generate new sequences
of face observations for given sequences of labels. This was done by computing the most
likely sequence of observations for the given set of labels. An in-depth evaluation of the
trained SLDS model, when used to generate new video sequences, is given in section 4. This
evaluation shows that SLDS is overly flexible: it appears to explain the data well and results
in a very high likelihood, but does a poor job at generating realistic new sequences.
Deterministic Process Dynamical System. We reduced the complexity of the model
by simplifying its covariance structure. If we set the output noise vt of the SLDS to zero,
leaving only process noise, we obtain the autoregressive hidden Markov model [11]. This
model has the advantage that it can be trained using an EM algorithm when the state labels
are unknown, but we find that it performs very poorly at data generation. If we set the
process noise wt = 0, however, then we obtain a more useful model. The complete hidden
sequence {x}_1^T is then determined exactly by the labels {ℓ}_1^T. The log-likelihood p({y}|{ℓ})
is given by

log p({y}|{x}) = −½ Σ_{s=1}^S [ log|Σ_{ℓ_1^s}| + (y_1^s − x_1^s)ᵀ Σ_{ℓ_1^s}^{−1} (y_1^s − x_1^s) + Σ_{t=2}^{T_s} ( log|R_{ℓ_t^s}| + (y_t^s − x_t^s)ᵀ R_{ℓ_t^s}^{−1} (y_t^s − x_t^s) ) + d T_s log 2π ]   (1)
where x_1^s = μ_{ℓ_1^s} and x_t^s = A_{ℓ_t^s} x_{t−1}^s + ν_{ℓ_t^s} for t > 1. We will now refer to this model as the
Deterministic Process Dynamical System (DPDS, see Figure 3). In our implementation we
[Figure 4 panels: (a) Mean L1 distance; (b) RMS Error; (c) Mean L∞ distance; (d) Log-likelihood.]
Figure 4: Comparison of the multiple models on the test data of 10-fold cross-validation. Each
plot shows the mean error of the generated data with respect to the real data over the 10 folds.
The error bars span the 95% confidence interval of the true error.
model all matrices R_{ℓ_t^s}, Σ_{ℓ_t^s} as diagonal, and further reduce the complexity by sharing the
output noise covariance over all states. It is reasonable to assume this because the features
are the result of PCA and are therefore uncorrelated.
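Under these diagonal-covariance assumptions, equation (1) reduces to a simple forward recursion over one sequence. The sketch below is our own illustration with hypothetical parameter containers, not the authors' code.

```python
import numpy as np

def dpds_loglik(y, labels, A, nu, mu0, Sigma0, R_diag):
    """Log-likelihood of one sequence under the DPDS, eq. (1), with
    diagonal covariances.  Hypothetical parameter containers:
      A[l] : (d, d) state dynamics,  nu[l] : (d,) state bias,
      mu0[l], Sigma0[l] : (d,) initial mean and diagonal covariance,
      R_diag : (d,) shared diagonal output-noise covariance."""
    T, d = y.shape
    x = mu0[labels[0]]                                  # deterministic hidden state
    ll = np.sum(np.log(Sigma0[labels[0]])) \
         + np.sum((y[0] - x) ** 2 / Sigma0[labels[0]])
    for t in range(1, T):
        x = A[labels[t]] @ x + nu[labels[t]]            # x_t = A x_{t-1} + nu
        ll += np.sum(np.log(R_diag)) + np.sum((y[t] - x) ** 2 / R_diag)
    return -0.5 * (ll + d * T * np.log(2 * np.pi))
```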
Since in this case the labels ℓ_t^s are known, equation (1) does not contain any hidden variables. Applying EM is therefore not necessary. Deriving a closed-form solution for the ML
estimates of the parameters, however, results in solving polynomial equations of order
T_s, because x_t^s = f(A_{ℓ_2^s} · · · A_{ℓ_t^s}). An efficient solution is to use a gradient-based method.
The log-likelihood of a sequence is a sum of scaled quadratic terms in (y_t^s − x_t^s), where
x_t^s = f({ℓ}_1^t). The log-likelihood must thus be computed by a forward iteration over all
time steps t, using x_{t−1}^s to compute x_t^s. The gradients of the likelihood with respect to A_{ℓ_t^s}
can be computed numerically in a similar fashion, by applying the chain rule iteratively at
each time step and storing the result for the next step. The same could be done for other
parameters; however, for given values of A_{ℓ_t^s}, the values of Σ_{ℓ_t^s}, ν_{ℓ_t^s} and R_{ℓ_t^s} that maximise
the likelihood can be computed exactly by solving a set of linear equations. This markedly
improves the rate of convergence. An algorithm for the computation of the gradients with
respect to A_{ℓ_t^s} and the exact evaluation of the other parameters is given in Appendix A.
Sequence generation. Since all models parametrise the distribution of the data, we can
sample them to generate new observation sequences. In order to evaluate the performance
of the models and compare it to Brand's model, it is however useful to generate, for a given
sequence of labels, the most likely sequence of observation features, which can then be
compared with the features of the corresponding real video sequence.
For both the SLDS (when B_{ℓ_t^s} = I) and the DPDS, the mean for a given sequence of labels
{ℓ}_1^T is found by a forward iteration starting with ŷ_1 = μ_{ℓ_1^s} and iterating for t > 1 with
ŷ_t = A_{ℓ_t^s} ŷ_{t−1} + ν_{ℓ_t^s}. This does not require the storage of the complete sequence in memory,
as the current observation only depends on the previous one. In setups where artificial
speech is generated, the video sequence can therefore be generated at the same time as the
audio sequence and without length limitations, with O(d) space and O(dT ) time complexity,
where d is the dimensionality of the face features (without delta features).
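A sketch of this streaming generation follows, again with hypothetical parameter containers (the same assumed names as in the log-likelihood sketch above).

```python
def dpds_generate(labels, A, nu, mu0):
    """Stream the most likely feature sequence for a phoneme label sequence:
    each frame depends only on the previous one, so frames can be emitted in
    real time alongside the audio, with O(d) memory."""
    x = mu0[labels[0]]                 # y_hat_1 = mu_{l_1}
    yield x
    for l in labels[1:]:
        x = A[l] @ x + nu[l]           # y_hat_t = A_{l_t} y_hat_{t-1} + nu_{l_t}
        yield x
```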
4
Evaluation against real video
We evaluated the models in two ways: (1) by computing the error between generated face
features and a ground truth (the features of real video), and (2) by asking human subjects
to rate how they perceived the sequences. Both tests were done on the same real-world
data, but partitioned differently: the comparison to the ground truth was done using 10-fold cross-validation, while the test on humans was done using a single partitioning, due to
the limited availability of unbiased test subjects.
Test error and likelihood. In order to test the models against the ground truth, we
use the sound to align the labels to the video and generate the corresponding face features.
We use 10-fold cross validation and evaluate the performance of the models using different
metrics, see Figure 4. Plot (a) shows, for different models, the L1 error between the face
A         prefer A   undecided   prefer B   B
Brand         5          7          54      DPDS
Brand         4          7          55      reality
Brand        36         21           9      SLDS
DPDS         29         11          26      reality
DPDS         60          5           1      SLDS
reality      58          5           3      SLDS

DPDS ≈ reality ≻ Brand ≻ SLDS
Table 1: Raw results of the psychophysical test conducted by human volunteers. Every model is
compared to every other model; the order in which models are listed in this table is meaningless.
See text for details.
features generated for the test sound sequences and the face features extracted from the
real video. We compared the sequences generated by DPDS, Brand's model and SLDS to
the most likely observations under a standard HMM. This last model just generates the
mean face for each phoneme, hence resulting in very unnatural sequences. It illustrates how
an obviously incorrect model nevertheless performs very similarly to the other models in
terms of generation error. Plots (b) and (c) respectively show the corresponding Root Mean
Square (RMS) and L-infinity error. We can see that, except for the SLDS, which performs worse
than the other methods in terms of L1, RMS and L-infinity error, the generation error for the
models considered, under all metrics, is consistently not statistically significantly different.
In terms of the log-likelihood of the test data under the different models, the opposite is true:
the traditional HMM and DPDS clearly perform worst, while SLDS performs dramatically
better. The model with the highest likelihood generates the sequences with the largest error.
The likelihood under Brand's model cannot be compared directly as it has double the number
of features. These results notwithstanding, great differences can be seen in the quality of the
generated video sequences, and the models giving the lowest error or the highest likelihood
are far from generating the most realistic sequences. We have therefore performed a rigorous
test where volunteers were asked to evaluate the quality of the sequences.
Psychophysical test. For this experiment, we trained the models on a training set of 642
sequences of an average of 5 seconds each. We then labelled the sequences in our test set,
which consists of 80 sequences and 436 seconds of video from sound with phonemes. These
are substantial amounts of data, showing the face in a wide variety of positions.
We set up a web-based test, where 33 volunteers compared 12 pairs of video sequences.
All video sequences had original sound, but the video stream was generated by any one
of four methods: (1) from the face features extracted from the corresponding real video,
(2) from SLDS, (3) from Brand's model and (4) from DPDS. A pool of 80 sequences was
generated from previously unseen videos. The 12 pairs were chosen such that each generation
method was pitted against each other generation method twice (once on each side, left or
right, in order to eliminate bias towards a particular side) in random order. For each pair,
corresponding sequences were chosen from the respective pools at random. The volunteers
were only told that the sequences were either real or artificial, and were asked to either
select the real video or to indicate that they could not decide. The test is kept available
on-line for validation at http://www.cs.manchester.ac.uk/ai/public/dpdseval.
The results are shown in Table 1. The first row, e.g., shows that when comparing Brand's
model with the DPDS, people thought that the sequence generated with the former model
was real in 5 cases, could not make up their mind in 7 cases, and thought the sequence
generated with DPDS was real in 54 instances. These results indicate that DPDS performs
quite well at generation, clearly much better than the two other models. Note however
that this test discriminates the models very harshly. Despite the strong down-voting of
Brand's model in this test, the sequences generated with that model do not look all that
bad. They are over-smoothed, however, and humans appear to be very sensitive to that.
Also remember that Brand's model is the only model considered here with a closed-form
solution for the parameter estimation given the labels. Contrary to the other two models,
it can easily be trained in the absence of labelling, using an EM algorithm.
In order to correlate human judgement with the generation errors discussed at the start of
this section, we have computed the same error measures on the data as partitioned for the
psychophysical test. These confirmed the earlier conclusions: the SLDS, which humans like
least, gives the highest likelihood and the worst generation errors while DPDS and Brand's
model do not give significantly different errors.
5 Conclusion
In this work we have proposed a truly generative model, which allows real-time generation
of talking faces given speech. We have evaluated it both using multiple error measures
and with a thorough test of human perception. The latter test clearly shows that our
method perceptually outperforms the others and is virtually indistinguishable from reality.
Compared to Brand's method it is slower during training, and cannot easily be trained in
the absence of labelling. This is a trade-off for the very fast generation and visually much
more appealing face animation.
In addition, we have shown that traditional metrics do not agree with human perception.
The error measures do not necessarily favour our method, but the human preference for
it is very significant. We believe this deserves deeper analysis. In future work, we plan
to investigate different error measures, especially on the more directly interpretable video
frames rather than on the extracted features. We also intend to experiment with a covariance
matrix per state and an unrestricted matrix structure for the transition matrix A_{phi_t^s}.
A Parameter estimation in DPDS
The log-likelihood of a sequence is given by eq. 1, which is a multiplicative function of A
(x_1 = f(A_{phi_1^s}), x_2 = f(A_{phi_2^s} A_{phi_1^s}), etc.). Applying the chain rule repeatedly gives us, for
diagonal matrices, and using L_t to denote the log-likelihood of a single observation at time
t, that dL_1/dA_n = 0 and dL_t/dA_n = R_{phi_t^s}^{-1} (y_t^s - x_t^s)(dx_t^s/dA_n) for 2 <= t <= T, where

    dx_t^s/dA_n = x_{t-1}^s delta_{n,phi_t^s} + A_{phi_t^s} dx_{t-1}^s/dA_n,   where delta_{n,phi_t^s} = 1 iff n = phi_t^s.        (2)
There we give the gradients for diagonal matrices for simplicity of notation and because we
used diagonal matrices for this work, but the same principle applies to full matrices. The
gradient of the likelihood is then dL/dA_n = sum_{s=1}^{S} sum_{t=2}^{T_s} dL_{s,t}/dA_n. In general the same
is done for the other parameters of the model; however, when the covariance is shared by all
states, the value of the other parameters can be maximised exactly as described below. In
the following, superscripts differentiate between variables by indicating what the variable is
a coefficient to. The covariance is

    R = sum_{s=1}^{S} sum_{t=2}^{T_s} (y_t^s - x_t^s)(y_t^s - x_t^s)^T / sum_{s=1}^{S} (T_s - 1),

where x_1^s = mu_{phi_1^s} and x_t^s = A_{phi_t^s} x_{t-1}^s + gamma_{phi_t^s}, while the mu_{phi_t^s} and gamma_{phi_t^s} are
found by solving the system of linear equations (3), for which the coefficients D and b are computed
by Algorithm 1, which takes {phi}, {y} and the current values of A_{1...nu} as input:
    [ diag_{n=1..nu}(D_n^{mu,mu})   D^{mu,gamma}  ] [ mu_{1..nu}    ]   [ b^mu_{1..nu}    ]
    [ (D^{mu,gamma})^T              D^{gamma,gamma} ] [ gamma_{1..nu} ] = [ b^gamma_{1..nu} ]        (3)

where each block D is itself a nu x nu block matrix of coefficients,

    D = [ X_{1,1}  ...  X_{1,nu} ]
        [   ...    ...    ...    ]
        [ X_{nu,1} ...  X_{nu,nu} ].
Algorithm 1 Maximisation of L with respect to mu and gamma

for n in {1 ... nu} do
    b_n^mu <- 0, b_n^gamma <- 0, D_n^{mu,mu} <- 0
    for all m in {1 ... nu}: D_{n,m}^{mu,gamma} <- 0, D_{n,m}^{gamma,gamma} <- 0
    for s in {s | phi_1^s = n} do                      # compute coefficients D_n^{mu,mu}, D_{n,m}^{mu,gamma}, b_n^mu for mu_n
        D_n^{mu,mu} <- D_n^{mu,mu} + I,  D <- I,  b_n^mu <- b_n^mu + y_1^s
        for all m in {1 ... nu}: C_m <- 0               # C_m and D below are temporary variables
        for t in {2 ... T_s} do
            D <- A_{phi_t^s} D,  D_n^{mu,mu} <- D_n^{mu,mu} + D^T D,  b_n^mu <- b_n^mu + D^T y_t^s
            for all m in {1 ... nu}: C_m <- A_{phi_t^s} C_m,  D_{n,m}^{mu,gamma} <- D_{n,m}^{mu,gamma} + D^T C_m
            C_{phi_t^s} <- C_{phi_t^s} + I
        end for
    end for
    for s in {1 ... S} do                              # compute coefficients D_{n,m}^{gamma,gamma}, b_n^gamma for gamma_n
        for all m in {1 ... nu}: C_m <- 0
        D <- 0,  C <- I                                # C_m, D and C are temporary variables
        for t in {2 ... T_s} do
            D <- A_{phi_t^s} D
            if phi_t^s = n then
                D <- D + I
            end if
            for all m in {1 ... nu}: C_m <- A_{phi_t^s} C_m,  D_{n,m}^{gamma,gamma} <- D_{n,m}^{gamma,gamma} + D^T C_m
            C_{phi_t^s} <- C_{phi_t^s} + I,  C <- A_{phi_t^s} C,
            D_{phi_1^s,n}^{mu,gamma} <- D_{phi_1^s,n}^{mu,gamma} + D^T C,  b_n^gamma <- b_n^gamma + D^T y_t^s
        end for
    end for
end for
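To make the forward gradient recursion of equation (2) concrete, the following NumPy sketch accumulates dL/dA_n for one sequence with diagonal A and a shared diagonal covariance. It is our own illustration (names and storage layout are assumptions) and covers only the gradient step, not the exact mu, gamma solve of Algorithm 1:

```python
import numpy as np

def dpds_A_gradient(labels, y, A, mu, gamma, R_inv):
    """Accumulate dL/dA_n over one observation sequence.

    labels: length-T state indices phi_t; y: (T, d) observations.
    A, mu, gamma: (num_states, d) diagonal parameters; R_inv: (d,) diagonal of R^-1.
    Uses dx_t/dA_n = x_{t-1} * delta(n, phi_t) + A_{phi_t} * dx_{t-1}/dA_n.
    """
    num_states, d = A.shape
    x = mu[labels[0]].copy()               # x_1 = mu_{phi_1}
    dx = np.zeros((num_states, d))         # dx_t/dA_n, one row per state n
    grad = np.zeros((num_states, d))
    for t in range(1, len(labels)):
        n = labels[t]
        dx = A[n] * dx                     # A_{phi_t} dx_{t-1}/dA_n
        dx[n] += x                         # + x_{t-1} when n = phi_t
        x = A[n] * x + gamma[n]            # x_t
        grad += R_inv * (y[t] - x) * dx    # dL_t/dA_n, summed over t
    return grad
```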
2,522 | 3,288 | Density Estimation under Independent Similarly
Distributed Sampling Assumptions
Tony Jebara, Yingbo Song and Kapil Thadani
Department of Computer Science
Columbia University
New York, NY 10027
{ jebara,yingbo,kapil }@cs.columbia.edu
Abstract
A method is proposed for semiparametric estimation where parametric and nonparametric criteria are exploited in density estimation and unsupervised learning.
This is accomplished by making sampling assumptions on a dataset that smoothly
interpolate between the extreme of independently distributed (or id) sample data
(as in nonparametric kernel density estimators) to the extreme of independent
identically distributed (or iid) sample data. This article makes independent similarly distributed (or isd) sampling assumptions and interpolates between these two
using a scalar parameter. The parameter controls a Bhattacharyya affinity penalty
between pairs of distributions on samples. Surprisingly, the isd method maintains
certain consistency and unimodality properties akin to maximum likelihood estimation. The proposed isd scheme is an alternative for handling nonstationarity in
data without making drastic hidden variable assumptions which often make estimation difficult and laden with local optima. Experiments in density estimation
on a variety of datasets confirm the value of isd over iid estimation, id estimation
and mixture modeling.
1 Introduction
Density estimation is a popular unsupervised learning technique for recovering distributions from
data. Most approaches can be split into two categories: parametric methods where the functional
form of the distribution is known a priori (often from the exponential family (Collins et al., 2002;
Efron & Tibshirani, 1996)) and non-parametric approaches which explore a wider range of distributions with less constrained forms (Devroye & Gyorfi, 1985). Parametric approaches can underfit
or may be mismatched to real-world data if they are built on incorrect a priori assumptions. A
popular non-parametric approach is kernel density estimation or the Parzen windows method (Silverman, 1986). However, these may over-fit thus requiring smoothing, bandwidth estimation and
adaptation (Wand & Jones, 1995; Devroye & Gyorfi, 1985; Bengio et al., 2005). Semiparametric
efforts (Olking & Spiegelman, 1987) combine the complementary advantages of both schools. For
instance, mixture models in their infinite-component setting (Rasmussen, 1999) as well as statistical
processes (Teh et al., 2004) make only partial parametric assumptions. Alternatively, one may seed
non-parametric distributions with parametric assumptions (Hjort & Glad, 1995) or augment parametric models with nonparametric factors (Naito, 2004). This article instead proposes a continuous
interpolation between iid parametric density estimation and id kernel density estimation. It makes
independent similarly distributed (isd) sampling assumptions on the data. In isd, a scalar parameter
lambda trades off parametric and non-parametric properties to produce an overall better density estimate.
The method avoids sampling or approximate inference computations and only recycles well known
parametric update rules for estimation. It remains computationally efficient, unimodal and consistent
for a wide range of models.
This paper is organized as follows. Section 2 shows how id and iid sampling setups can be smoothly
interpolated using a novel isd posterior which maintains log-concavity for many popular models.
Section 3 gives analytic formulae for the exponential family case as well as slight modifications
to familiar maximum likelihood updates for recovering parameters under isd assumptions. Some
consistency properties of the isd posterior are provided. Section 4 then extends the method to hidden
variable models or mixtures and provides simple update rules. Section 5 provides experiments
comparing isd with id and iid as well as mixture modeling. We conclude with a brief discussion.
2 A Continuum between id and iid
Assume we are given a dataset of N - 1 inputs x_1, . . . , x_{N-1} from some sample space Omega. Given
a new query input x_N also in the same sample space, density estimation aims at recovering a
density function p(x_1, . . . , x_{N-1}, x_N) or p(x_N | x_1, . . . , x_{N-1}) using a Bayesian or frequentist approach. Therefore, a general density estimation task is, given a dataset X = x_1, . . . , x_N, recover
p(x_1, . . . , x_N). A common subsequent assumption is that the data points are id or independently
sampled, which leads to the following simplification:
    p_id(X) = prod_{n=1}^{N} p_n(x_n).
The joint likelihood factorizes into a product of independent singleton marginals pn (xn ) each of
which can be different. A stricter assumption is that all samples share the same singleton marginal:
    p_iid(X) = prod_{n=1}^{N} p(x_n),
which is the popular iid sampling situation. In maximum likelihood estimation, either of the above
likelihood scores (pid or piid ) is maximized by exploring different settings of the marginals. The
id setup gives rise to what is commonly referred to as kernel density or Parzen estimation. Meanwhile, the iid setup gives rise to traditional iid parametric maximum likelihood (ML) or maximum
a posteriori (MAP) estimation. Both methods have complementary advantages and disadvantages.
The iid assumption may be too aggressive for many real world problems. For instance, data may
be generated by some slowly time-varying nonstationary distribution or (more distressingly) from
a distribution that does not match our parametric assumptions. Similarly, the id setup may be too
flexible and might over-fit when the marginal pn (x) is myopically recovered from a single xn .
Consider the parametric ML and MAP setting where parameters Theta = {theta_1, . . . , theta_N} are used to
define the marginals. We will use p(x|theta_n) = p_n(x) interchangeably. The MAP id parametric
setting involves maximizing the following posterior (likelihood times a prior) over the models:
    p_id(X, Theta) = prod_{n=1}^{N} p(x_n|theta_n) p(theta_n).
To mimic ML, simply set p(theta_n) to uniform. For simplicity assume that these singleton priors are
always kept uniform. Parameters Theta are then estimated by maximizing p_id. To obtain the iid setup,
we can maximize p_id subject to constraints that force all marginals to be equal, in other words
theta_m = theta_n for all m, n in {1, . . . , N}.
Instead of applying N (N ? 1)/2 hard pairwise constraints in an iid setup, consider imposing
penalty functions across pairs of marginals. These penalty functions reduce the posterior score when
marginals disagree and encourage some stickiness between models (Teh et al., 2004). We measure
the level of agreement between two marginals pm (x) and pn (x) using the following Bhattacharyya
affinity metric (Bhattacharyya, 1943) between two distributions:
    B(p_m, p_n) = B(p(x|theta_m), p(x|theta_n)) = integral p^beta(x|theta_m) p^beta(x|theta_n) dx.
This is a symmetric non-negative quantity in both distributions p_m and p_n. The natural choice
for the setting of beta is 1/2 and in this case, it is easy to verify the affinity is maximal and equals
one if and only if p_m(x) = p_n(x). A large family of alternative information divergences exist
to relate pairs of distributions (Topsoe, 1999) and are discussed in the Appendix. In this article,
the Bhattacharyya affinity is preferred since it has some useful computational, analytic, and log-concavity properties. In addition, it leads to straightforward variants of the estimation algorithms as
in the id and iid situations for many choices of parametric densities. Furthermore, (unlike Kullback
Leibler divergence) it is possible to compute the Bhattacharyya affinity analytically and efficiently
for a wide range of probability models including hidden Markov models (Jebara et al., 2004).
We next define (up to a constant scaling) the posterior score for independent similarly distributed
(isd) data:
    p_lambda(X, Theta) proportional-to prod_n p(x_n|theta_n) p(theta_n) prod_{m != n} B^{lambda/N}(p(x|theta_m), p(x|theta_n)).        (1)
Here, a scalar power lambda/N is applied to each affinity. The parameter lambda adjusts the importance of the
similarity between pairs of marginals. Clearly, if lambda -> 0, then the affinity is always unity and the
marginals are completely unconstrained as in the id setup. Meanwhile, as lambda -> infinity, the affinity is
zero unless the marginals are exactly identical. This produces the iid setup. We will refer to the isd
posterior as Equation 1 and when p(theta_n) is set to uniform, we will call it the isd likelihood. One can
also view the additional term in isd as id estimation with a modified prior p-tilde(Theta) as follows:

    p-tilde(Theta) proportional-to prod_n p(theta_n) prod_{m != n} B^{lambda/N}(p(x|theta_m), p(x|theta_n)).
This prior is a Markov random field tying all parameters in a pairwise manner in addition to the
standard singleton potentials in the id scenario. However, this perspective is less appealing since it
disguises the fact that the samples are not quite id or iid.
One of the appealing properties of iid and id maximum likelihood estimation is its unimodality for
log-concave distributions. The isd posterior also benefits from a unique optimum and log-concavity.
However, the conditional distributions p(x|theta_n) are required to be jointly log-concave in both parameters theta_n and data x. This set of distributions includes the Gaussian distribution (with fixed variance)
and many exponential family distributions such as the Poisson, multinomial and exponential distribution. We next show that the isd posterior score for log-concave distributions is log-concave in Theta.
This produces a unique estimate for the parameters as was the case for id and iid setups.
Theorem 1 The isd posterior is log-concave for jointly log-concave density distributions and for
log-concave prior distributions.
Proof 1 The isd log-posterior is the sum of the id log-likelihoods, the singleton log-priors and pairwise log-Bhattacharyya affinities:
    log p_lambda(X, Theta) = const + sum_n log p(x_n|theta_n) + sum_n log p(theta_n) + (lambda/N) sum_n sum_{m != n} log B(p_m, p_n).
The id log-likelihood is the sum of the log-probabilities of distributions that are log-concave in the
parameters and is therefore concave. Adding the log-priors maintains concavity since these are log-concave in the parameters. The Bhattacharyya affinities are log-concave by the following key result
(Prekopa, 1973). The Bhattacharyya affinity for log-concave distributions is given by the integral
over the sample space of the product of two distributions. Since the term in the integral is a product
of jointly log-concave distributions (by assumption), the integrand is a jointly log-concave function.
Integrating a log-concave function over some of its arguments produces a log-concave function in
the remaining arguments (Prekopa, 1973). Therefore, the Bhattacharyya affinity is log-concave in
the parameters of jointly log-concave distributions. Finally, since the isd log-posterior is the sum of
concave terms and concave log-Bhattacharyya affinities, it must be concave.
This log-concavity permits iterative and greedy maximization methods to reliably converge in practice. Furthermore, the isd setup will produce convenient update rules that build upon iid estimation
algorithms. There are additional properties of isd which are detailed in the following sections. We
first explore the beta = 1/2 setting and subsequently discuss the beta = 1 setting.
3 Exponential Family Distributions and beta = 1/2

We first specialize the above derivations to the case where the singleton marginals obey the exponential family form as follows:

    p(x|theta_n) = exp( H(x) + theta_n^T T(x) - A(theta_n) ).

An exponential family distribution is specified by providing H, the Lebesgue-Stieltjes integrator, theta_n
the vector of natural parameters, T, the sufficient statistic, and A the normalization factor (which
is also known as the cumulant-generating function or the log-partition function). Tables of these
values are shown in (Jebara et al., 2004). The function A is obtained by normalization (a Legendre
transform) and is convex by construction. Therefore, exponential family distributions are always
log-concave in the parameters theta_n. For the exponential family, the Bhattacharyya affinity is computable in closed form as follows:

    B(p_m, p_n) = exp( A(theta_m/2 + theta_n/2) - A(theta_m)/2 - A(theta_n)/2 ).
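As a small numeric illustration (ours, not from the paper), the closed form is trivial to evaluate once A is known; here we take A(theta) = theta^T theta / 2, the unit-covariance Gaussian log-partition up to constants:

```python
import numpy as np

def bhattacharyya_expfam(theta_m, theta_n, A):
    # B(p_m, p_n) = exp(A((theta_m + theta_n)/2) - A(theta_m)/2 - A(theta_n)/2)
    return np.exp(A(0.5 * (theta_m + theta_n)) - 0.5 * A(theta_m) - 0.5 * A(theta_n))

A_gauss = lambda th: 0.5 * float(th @ th)   # white Gaussian, theta = mean

m, n = np.array([0.0, 0.0]), np.array([2.0, 0.0])
print(bhattacharyya_expfam(m, n, A_gauss))  # exp(-|m - n|^2 / 8) = exp(-0.5)
```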
Assuming uniform priors on the exponential family parameters, it is now straightforward to write
an iterative algorithm to maximize the isd posterior. We find settings of theta_1, . . . , theta_N that maximize
the isd posterior or log p_lambda(X, Theta) using a simple greedy method. Assume a current set of parameters is available theta-tilde_1, . . . , theta-tilde_N. We then update a single theta_n to increase the posterior while all other
parameters (denoted Theta-tilde_{/n}) remain fixed at their previous settings. It suffices to consider only terms
in log p_lambda(X, Theta) that are variable with theta_n:

    log p_lambda(X, theta_n, Theta-tilde_{/n}) = const + theta_n^T T(x_n) - ((N + lambda(N-1))/N) A(theta_n) + (2 lambda/N) sum_{m != n} A(theta-tilde_m/2 + theta_n/2).
If the exponential family is jointly log-concave in parameters and data (as is the case for Gaussians),
this term is log-concave in theta_n. Therefore, we can take a partial derivative of it with respect to theta_n and
set to zero to maximize:

    A'(theta_n) = (N / (N + lambda(N-1))) [ T(x_n) + (lambda/N) sum_{m != n} A'(theta-tilde_m/2 + theta_n/2) ].        (2)
For the Gaussian mean case (i.e. a white Gaussian with covariance locked at identity), we have
A(theta) = theta^T theta. Then a closed-form formula is easy to recover from the above1. However, a simpler
iterative update rule for theta_n is also possible as follows. Since A(theta) is a convex function, we can
compute a linear variational lower bound on each A(theta_m/2 + theta_n/2) term for the current setting of
theta_n:

    log p_lambda(X, theta_n, Theta-tilde_{/n}) >= const + theta_n^T T(x_n) - ((N + lambda(N-1))/N) A(theta_n)
        + (lambda/N) sum_{m != n} [ 2 A(theta-tilde_m/2 + theta-tilde_n/2) + A'(theta-tilde_m/2 + theta-tilde_n/2)^T (theta_n - theta-tilde_n) ].

This gives an iterative update rule of the form of Equation 2 where the theta_n on the right hand side is
kept fixed at its previous setting (i.e. replace the right hand side theta_n with theta-tilde_n) while the equation is
iterated multiple times until the value of theta_n converges. Since we have a variational lower bound,
each iterative update of theta_n monotonically increases the isd posterior. We can also work with a robust
(yet not log-concave) version of the isd score which has the form:
    log p_lambda^epsilon(X, Theta) = const + sum_n log p(x_n|theta_n) + sum_n log p(theta_n) + (lambda/N) sum_n log ( sum_{m != n} B^epsilon(p_m, p_n) )
and leads to the general update rule (where epsilon = 0 reproduces isd and larger epsilon increases robustness):

    A'(theta_n) = (N / (N + lambda(N-1))) [ T(x_n) + (lambda/N) sum_{m != n} ( (N-1) B^epsilon(p(x|theta-tilde_m), p(x|theta-tilde_n)) / sum_{l != n} B^epsilon(p(x|theta-tilde_l), p(x|theta-tilde_n)) ) A'(theta-tilde_m/2 + theta-tilde_n/2) ].
We next examine marginal consistency, another important property of the isd posterior.
1 The update for the Gaussian mean with covariance = I is: theta_n = (1 / (N + lambda(N-1)/2)) ( N x_n + (lambda/2) sum_{m != n} theta-tilde_m ).
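A minimal sketch of the resulting iteration in the Gaussian mean case, using the closed form from footnote 1 (function and variable names are ours, not the authors' code):

```python
import numpy as np

def isd_gaussian_means(X, lam, n_iters=100):
    """Iterate theta_n = (N x_n + (lam/2) sum_{m != n} theta_m) / (N + lam(N-1)/2).

    X: (N, d) data.  lam = 0 leaves theta_n = x_n (id); large lam pulls all
    means toward a common value (iid).
    """
    N = len(X)
    theta = X.copy()                        # id solution as initialization
    denom = N + lam * (N - 1) / 2.0
    for _ in range(n_iters):                # Gauss-Seidel-style sweeps
        total = theta.sum(axis=0)
        for n in range(N):
            others = total - theta[n]       # sum over m != n
            new = (N * X[n] + 0.5 * lam * others) / denom
            total += new - theta[n]
            theta[n] = new
    return theta
```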
3.1 Marginal Consistency in the Gaussian Mean Case
For marginal consistency, if a datum and model parameter are hidden and integrated over, this should
not change our estimate. It is possible to show that the isd posterior is marginally consistent at least
in the Gaussian mean case (one element of the exponential family). In other words, marginalizing
over an observation and its associated marginal's parameter (which can be taken to be x_N and theta_N
without loss of generality) still produces a similar isd posterior on the remaining observations X_{/N}
and parameters Theta_{/N}. Thus, we need:

    integral integral p_lambda(X, Theta) dx_N dtheta_N proportional-to p_lambda(X_{/N}, Theta_{/N}).

We then would recover the posterior formed using the formula in Equation 1 with only N - 1
observations and N - 1 models.
Theorem 2 The isd posterior with beta = 1/2 is marginally consistent for Gaussian distributions.
Proof 2 Start by integrating over x_N:

    integral p_lambda(X, Theta) dx_N proportional-to prod_{i=1}^{N-1} p(x_i|theta_i) prod_{n=1}^{N} p(theta_n) prod_{n=1}^{N-1} prod_{m=n+1}^{N} B^{2 lambda/N}(p_m, p_n).

Assume the singleton prior p(theta_N) is uniform and integrate over theta_N to obtain:

    integral integral p_lambda(X, Theta) dx_N dtheta_N proportional-to prod_{i=1}^{N-1} p(x_i|theta_i) prod_{n=1}^{N-1} prod_{m=n+1}^{N-1} B^{2 lambda/N}(p_m, p_n) integral prod_{m=1}^{N-1} B^{2 lambda/N}(p_m, p_N) dtheta_N.

Consider only the right hand integral and impute the formula for the Bhattacharyya affinity:

    integral prod_{m=1}^{N-1} B^{2 lambda/N}(p_m, p_N) dtheta_N = integral exp( (2 lambda/N) sum_{m=1}^{N-1} [ A(theta_m/2 + theta_N/2) - A(theta_m)/2 - A(theta_N)/2 ] ) dtheta_N.

In the (white) Gaussian case A(theta) = theta^T theta, which simplifies the above into:

    integral prod_{m=1}^{N-1} B^{2 lambda/N}(p_m, p_N) dtheta_N = integral exp( -(2 lambda/N) sum_{m=1}^{N-1} A(theta_m/2 - theta_N/2) ) dtheta_N
        proportional-to exp( (2 lambda/(N(N-1))) sum_{n=1}^{N-1} sum_{m=n+1}^{N-1} [ A(theta_m/2 + theta_n/2) - A(theta_m)/2 - A(theta_n)/2 ] )
        proportional-to prod_{n=1}^{N-1} prod_{m=n+1}^{N-1} B^{2 lambda/(N(N-1))}(p_m, p_n).

Reinserting the integral changes the exponent of the pairs of Bhattacharyya affinities between the
(N - 1) models, raising it to the appropriate power:

    integral integral p_lambda(X, Theta) dx_N dtheta_N proportional-to prod_{i=1}^{N-1} p(x_i|theta_i) prod_{n=1}^{N-1} prod_{m=n+1}^{N-1} B^{2 lambda/(N-1)}(p_m, p_n) = p_lambda(X_{/N}, Theta_{/N}).
Therefore, we get the same isd score that we would have obtained had we started with only (N - 1)
data points. We conjecture that it is possible to generalize the marginal consistency argument to
other distributions beyond the Gaussian. The isd estimator thus has useful properties and still agrees
with id when lambda = 0 and iid when lambda = infinity. Next, the estimator is generalized to handle distributions
beyond the exponential family where latent variables are implicated (as is the case for mixtures of
Gaussians, hidden Markov models, latent graphical models and so on).
4 Hidden Variable Models and beta = 1
One important limitation of most divergences between distributions is that they become awkward
when dealing with hidden variables or mixture models. This is because they may involve intractable
integrals. The Bhattacharyya affinity with the setting ? = 1, also known as the probability product
kernel, is an exception to this since it only involves integrating the product of two distributions.
In fact, it is known that this affinity is efficient to compute for mixtures of Gaussians, multinomials and even hidden Markov models (Jebara et al., 2004). This permits the affinity metric to
efficiently pull together parameters ?m and ?n . However, for mixture models, there is the presence
of hidden variables
h in addition to observed variables. Therefore, we replace all the marginals
P
p(x|?n ) =
h p(x, h|?n ). The affinity is still straightforward to compute for any pair of latent
variable models (mixture models, hidden Markov models and so on). Thus, evaluating the isd posterior is straightforward for such models when ? = 1. We next provide a variational method that
makes it possible to maximize a lower bound on the isd posterior in these cases.
? = ??1 , . . . , ??N . We will find a new setting for ?n
Assume a current set of parameters is available ?
? /n ) remain fixed at their previous
that increases the posterior while all other parameters (denoted ?
settings. It suffices to consider only terms in log p? (X , ?) that depend on ?n . This yields:
Z
X
? /n ) = const + log p(xn |?n )p(?n ) + 2?
log p? (X , ?n , ?
log p(x|??m )p(x|?n )dx
N
m6=n
Z
2? X
? const + log p(xn |?n )p(?n ) +
p(x|??m ) log p(x|?n )dx
N
m6=n
? /n ) which is a
The application of Jensen?s inequality above produces an auxiliary function Q(?n |?
lower-bound
on the log-posterior. Note that each density function has hidden variables, p(xn |?n ) =
P
p(x
,
h|?
n
n ). Applying Jensen?s inequality again (as in the Expectation-Maximization or EM
h
algorithm) replaces the log-incomplete likelihoods over h with expectations over the complete pos? =
teriors given the previous parameters ??n . This gives isd the following auxiliary function Q(?n |?)
Z
X
X
2? X
p(h|xn , ??n ) log p(xn , h|?n ) + log p(?n ) +
p(x|??m )
p(h|x, ??n ) log p(x, h|?n )dx.
N
m6=n
h
h
This is a variational lower bound which can be iteratively maximized instead of the original isd
posterior. While it is possible to directly solve for the maximum of Q(theta_n|Theta-tilde) in some mixture
models, in practice, a further simplification is to replace the integral over x with synthesized samples
drawn from p(x|theta-tilde_m). This leads to the following approximate auxiliary function (based on the law
of large numbers), which is merely the update rule for EM for theta_n with s = 1, . . . , S virtual samples
x_{m,s} obtained from the m'th model p(x|theta-tilde_m) for each of the other N - 1 models:

    Q-hat(theta_n|Theta-tilde) = sum_h p(h|x_n, theta-tilde_n) log p(x_n, h|theta_n) + log p(theta_n)
        + (2 lambda/(SN)) sum_{m != n} sum_s sum_h p(h|x_{m,s}, theta-tilde_n) log p(x_{m,s}, h|theta_n).
We now have an efficient update rule for latent variable models (mixtures, hidden Markov models,
etc.) which maximizes a lower bound on p_lambda(X, Theta). Unfortunately, as with most EM implementations, the arguments for log-concavity no longer hold.
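A sketch of how the virtual-sample update might look for a diagonal-covariance Gaussian mixture is given below. This is our own illustration of Q-hat (the sample weighting follows the equation above, but the code layout, names and EM details are assumptions):

```python
import numpy as np

def isd_update_mixture(x_n, sample_neighbor, lam, N, S, model, n_em_iters=10):
    """One isd update of model n: weighted EM on x_n plus virtual samples.

    x_n: (d,) datum owned by model n.  sample_neighbor: list of N-1 callables,
    each returning (S, d) samples from a fixed neighboring model p(x|theta_m).
    model: dict with 'pi' (K,), 'mu' (K, d), 'var' (K, d).
    """
    virtual = np.vstack([draw(S) for draw in sample_neighbor])  # (S(N-1), d)
    X = np.vstack([x_n[None, :], virtual])
    w = np.concatenate([[1.0], np.full(len(virtual), 2.0 * lam / (S * N))])
    pi, mu, var = model['pi'], model['mu'], model['var']
    for _ in range(n_em_iters):
        # E-step: responsibilities p(h | x, theta_n) under diagonal Gaussians.
        logp = (np.log(pi) - 0.5 * (((X[:, None, :] - mu) ** 2) / var
                                    + np.log(2 * np.pi * var)).sum(axis=2))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted updates maximizing the auxiliary function.
        wr = w[:, None] * r
        Nk = wr.sum(axis=0)
        pi = Nk / Nk.sum()
        mu = (wr.T @ X) / Nk[:, None]
        var = np.maximum((wr.T @ X ** 2) / Nk[:, None] - mu ** 2, 1e-6)
    model.update(pi=pi, mu=mu, var=var)
    return model
```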
5 Experiments
A preliminary way to evaluate the usefulness of the isd framework is to explore density estimation
over real-world datasets under varying ?. If we set ? large, we have the standard iid setup and
only fit a single parametric model to the dataset. For small ?, we obtain the kernel density or
Parzen estimator. In between, an iterative algorithm is available to maximize the isd posterior to
?
obtain potentially superior models ?1? , . . . , ?N
. Figure 1 shows the isd estimator with Gaussian
models on a ring-shaped 2D dataset. The new estimator recovers the shape of the distribution more
accurately. To evaluate performance on real data, we aggregate the isd learned models into a single
density estimate as is done with Parzen estimators and compute the iid likelihood of held out test
2
1.5
1.5
1.5
1.5
1
1
0.5
0.5
1
1
0.5
0.5
0
0
0
0
?0.5
?0.5
?0.5
?0.5
?1
?1
?1.5
?1
?1.5
?2
?2
?1
0
1
?1
?1.5
?2
2
? = 0, ? = 0
?1.5
?1
?0.5
0
0.5
1
1.5
2
?1.5
?2
?1.5
? = 1, ? = 0
?1
?0.5
0
0.5
1
1.5
2
?2
? = 2, ? = 0
?1.5
?1
?0.5
0
0.5
1
1.5
2
? = ?, ? = 0
2
1.5
1.5
1.5
1
1
1
0.5
0.5
0.5
1.5
1
0.5
0
0
0
0
?0.5
?0.5
?0.5
?0.5
?1
?1
?1
?1
?1.5
?1.5
?1.5
?2
?2
?1
0
1
? = 0, ? =
2
1
2
?2
?1.5
?1
?0.5
0
0.5
1
1.5
? = 1, ? =
1
2
2
?1.5
?2
?1.5
?1
?0.5
0
0.5
1
1.5
? = 2, ? =
1
2
2
?2
?1.5
?1
?0.5
0
0.5
1
? = ?, ? =
1.5
2
1
2
Figure 1: Estimation with isd for Gaussian models (mean and covariance) on synthetic data.
Dataset    | id      | iid-1   | iid-2   | iid-3   | iid-4   | iid-5   | iid-inf | isd e=0 | isd e=1/2
-----------+---------+---------+---------+---------+---------+---------+---------+---------+----------
SPIRAL     | -5.61e3 | -1.36e3 | -1.36e3 | -1.19e3 | -7.98e2 | -6.48e2 | -4.86e2 | -2.26e2 | -1.19e2
MIT-CBCL   | -9.82e2 | -1.39e3 | -1.19e3 | -1.00e3 | -1.01e3 | -1.10e3 | -3.14e3 | -9.79e2 | -9.79e2
HEART      | -1.94e3 | -2.02e4 | -3.23e4 | -2.50e4 | -1.68e4 | -3.15e4 | -4.02e2 | -4.51e2 | -4.47e2
DIABETES   | -6.25e3 | -2.12e5 | -2.85e5 | -4.48e5 | -2.03e5 | -3.40e5 | -8.22e2 | -8.28e2 | -8.09e2
CANCER     | -5.80e3 | -7.22e6 | -2.94e6 | -3.92e6 | -4.08e6 | -3.96e6 | -1.22e2 | -5.54e2 | -5.54e2
LIVER      | -3.41e3 | -2.53e4 | -1.88e4 | -2.79e4 | -2.62e4 | -3.23e4 | -4.56e2 | -4.74e2 | -4.69e2

Table 1: Gaussian test log-likelihoods using id, iid, EM, infinite GMM and isd estimation.
A larger score implies a better p(x) density estimate. Table 1
summarizes experiments with the Gaussian (mean and covariance) models. On 6 standard datasets,
we show the average test log-likelihood of Gaussian estimation while varying the settings of lambda
compared to a single iid Gaussian, an id Parzen RBF estimator and a mixture of 2 to 5 Gaussians
using EM. Comparisons with (Rasmussen, 1999) are also shown. Cross-validation was used to
choose the sigma, lambda or EM local minimum (from ten initializations), for the id, isd and EM algorithms
respectively. Train, cross-validation and test split sizes were 80%, 10% and 10% respectively. The
test log-likelihoods show that isd outperformed iid, id and EM estimation and was comparable to
infinite Gaussian mixture (iid-infinity) models (Rasmussen, 1999) (which is a far more computationally
demanding method). In another synthetic experiment with hidden Markov models, 40 sequences
of 8 binary symbols were generated using 2-state HMMs with 2 discrete emissions. However, the
parameters generating the HMMs were allowed to slowly drift during sampling (i.e. not iid). The
data was split into 20 training and 20 testing examples. Table 2 shows that the isd estimator for
certain values of lambda produced higher test log-likelihoods than id and iid.
6 Discussion
This article has provided an isd scheme to smoothly interpolate between id and iid assumptions in
density estimation. This is done by penalizing divergence between pairs of models using a Bhattacharyya affinity. The method maintains simple update rules for recovering parameters for exponential families as well as mixture models. In addition, the isd posterior maintains useful log-concavity
and marginal consistency properties. Experiments show its advantages in real-world datasets where
id or iid assumptions may be too extreme. Future work involves extending the approach into other
aspects of unsupervised learning such as clustering. We are also considering computing the isd posterior with a normalizing constant which depends on lambda and thus permits a direct estimate of lambda by
maximization instead of cross-validation2.

lambda:          0        1        2        3        4        5        10       20       30       infinity
log-likelihood: -5.7153  -5.5875  -5.5692  -5.5648  -5.5757  -5.5825  -5.5849  -5.5856  -5.6152  -5.5721

Table 2: HMM test log-likelihoods using id, iid and isd estimation.
7 Appendix: Alternative Information Divergences
There is a large family of information divergences (Topsoe, 1999) between pairs of distributions
(Renyi measure, variational distance, ?2 divergence, etc.) that can be used to pull models pm and pn
towards each other. The Bhattacharya, though, is computationally easier to evaluate and minimize
over a wide range of probability models (exponential families, mixtures
and hidden Markov models).
R
An alternative is the Kullback-Leibler divergence D(pm kpn ) = pm (x)(log pm (x)?log pn (x))dx
and its symmetrized variant D(pm kpn )/2 + D(pn kpm )/2. The Bhattacharyya affinity is related to
the symmetrized variant of KL. Consider a variational distribution q that lies between the input pm
and pn . The log Bhattacharyya affinity with ? = 1/2 can be written as follows:
    log B(p_m, p_n) = log integral q(x) sqrt(p_m(x) p_n(x)) / q(x) dx >= -D(q || p_m)/2 - D(q || p_n)/2.
Thus, B(p_m, p_n) >= exp(-D(q || p_m)/2 - D(q || p_n)/2). The choice of q that maximizes the lower
bound on the Bhattacharyya is q(x) = (1/Z) sqrt(p_m(x) p_n(x)). Here, Z = B(p_m, p_n) normalizes q(x)
and is therefore equal to the Bhattacharyya affinity. Thus we have the following property:

    -2 log B(p_m, p_n) = min_q D(q || p_m) + D(q || p_n).
It is interesting to note that the Jensen-Shannon divergence (another symmetrized variant of KL)
emerges by placing the variational q distribution as the second argument in the divergences:

    2 JS(p_m, p_n) = D(p_m || (p_m + p_n)/2) + D(p_n || (p_m + p_n)/2) = min_q D(p_m || q) + D(p_n || q).

Simple manipulations then show 2 JS(p_m, p_n) <= min(D(p_m || p_n), D(p_n || p_m)). Thus, there are
close ties between Bhattacharyya, Jensen-Shannon and symmetrized KL divergences.
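The identity -2 log B(p_m, p_n) = min_q D(q || p_m) + D(q || p_n) is easy to sanity-check numerically for unit-covariance Gaussians, where the minimizing q is itself a unit-covariance Gaussian at the average mean (a small illustration of ours):

```python
import numpy as np

mu_m, mu_n = np.array([0.0, 0.0]), np.array([2.0, 2.0])
d2 = float(((mu_m - mu_n) ** 2).sum())

log_B = -d2 / 8.0                # Bhattacharyya affinity of two N(mu, I) Gaussians
q_mu = 0.5 * (mu_m + mu_n)       # q proportional to sqrt(p_m * p_n)
# KL between unit-covariance Gaussians is |mu1 - mu2|^2 / 2.
kl_sum = (((q_mu - mu_m) ** 2).sum() + ((q_mu - mu_n) ** 2).sum()) / 2

print(-2 * log_B, kl_sum)        # both equal |mu_m - mu_n|^2 / 4
```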
References

Bengio, Y., Larochelle, H., & Vincent, P. (2005). Non-local manifold Parzen windows. Neural Information Processing Systems.
Bhattacharyya, A. (1943). On a measure of divergence between two statistical populations defined by their probability distributions. Bull. Calcutta Math Soc.
Collins, M., Dasgupta, S., & Schapire, R. (2002). A generalization of principal components analysis to the exponential family. NIPS.
Devroye, L., & Gyorfi, L. (1985). Nonparametric density estimation: The l1 view. John Wiley.
Efron, B., & Tibshirani, R. (1996). Using specially designed exponential families for density estimation. The Annals of Statistics, 24, 2431-2461.
Hjort, N., & Glad, I. (1995). Nonparametric density estimation with a parametric start. The Annals of Statistics, 23, 882-904.
Jebara, T., Kondor, R., & Howard, A. (2004). Probability product kernels. Journal of Machine Learning Research, 5, 819-844.
Naito, K. (2004). Semiparametric density estimation by local l2-fitting. The Annals of Statistics, 32, 1162-1192.
Olking, I., & Spiegelman, C. (1987). A semiparametric approach to density estimation. Journal of the American Statistical Association, 82, 858-865.
Prekopa, A. (1973). On logarithmic concave measures and functions. Acta. Sci. Math., 34, 335-343.
Rasmussen, C. (1999). The infinite Gaussian mixture model. NIPS.
Silverman, B. (1986). Density estimation for statistics and data analysis. Chapman and Hall: London.
Teh, Y., Jordan, M., Beal, M., & Blei, D. (2004). Hierarchical Dirichlet processes. NIPS.
Topsoe, F. (1999). Some inequalities for information divergence and related measures of discrimination. Journal of Inequalities in Pure and Applied Mathematics, 2.
Wand, M., & Jones, M. (1995). Kernel smoothing. CRC Press.
2 Work supported in part by NSF Award IIS-0347499 and ONR Award N000140710507.
2,523 | 3,289 | GRIFT: A graphical model for inferring visual
classification features from human data
Michael G. Ross
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
mgross@mit.edu
Andrew L. Cohen
Psychology Department
University of Massachusetts Amherst
Amherst, MA 01003
acohen@psych.umass.edu
Abstract
This paper describes a new model for human visual classification that enables the
recovery of image features that explain human subjects' performance on different visual classification tasks. Unlike previous methods, this algorithm does not
model their performance with a single linear classifier operating on raw image
pixels. Instead, it represents classification as the combination of multiple feature
detectors. This approach extracts more information about human visual classification than previous methods and provides a foundation for further exploration.
1 Introduction
Although a great deal is known about the low-level features computed by the human visual system,
determining the information used to make high-level visual classifications is an active area of research. When a person distinguishes between two faces, for example, what image regions are most
salient? Since the early 1970s, one of the most important research tools for answering such questions
has been the classification image (or reverse correlation) algorithm, which assumes a linear classification model [1]. This paper describes a new approach, GRIFT (GRaphical models for Inferring
Feature Templates). Instead of representing human visual discrimination as a single linear classifier,
GRIFT models it as the non-linear combination of multiple independently detected features. This
allows GRIFT to extract more detailed information about human classification.
This paper describes GRIFT and the algorithms for fitting it to data, demonstrates the model's efficacy on simulated and human data, and concludes with a discussion of future research directions.
2 Related work
Ahumada's classification image algorithm [1] models an observer's classifications of visual stimuli
with a noisy linear classifier: a fixed set of weights and a normally distributed threshold. The
random threshold accounts for the fact that multiple presentations of the same stimulus are often
classified inconsistently. In a typical classification image experiment, participants are presented
with hundreds or thousands of noise-corrupted examples from two categories and asked to classify
each one. The noise ensures that the samples cover a large volume of the sample space in order to
allow recovery of a unique linear classifier that best explains the data.
Although classification images are useful in many cases, it is well established that there are domains
in which recognition and classification are the result of combining the detection of parts or features, rather than applying a single linear template. For example, Pelli et al. [10] have convincingly
demonstrated that humans recognize noisy word images by parts, even when whole-word templates
would perform better. Similarly, Gold et al. [7] verified that subjects employed feature-based classification strategies for some simple artificial image classes. GRIFT takes the next step and infers features which predict human performance directly from classification data.
[Figure 1: left, the GRIFT Bayesian network relating the stimulus S, the feature detectors F_i (with parameters theta_i, sigma_i), and the classification C (with parameters omega_0, omega_i), with the per-feature nodes enclosed in a plate replicated N times; right, targets and sample stimuli for the four-square, light-dark, and faces experiments.]

Figure 1: Left: The GRIFT model is a Bayes net that describes classification as the result of combining N feature detectors. Right: Targets and sample stimuli from the three experiments.
Most work on modeling non-linear, feature-based classification in humans has focused on verifying
the use of a predefined set of features. Recent work by Cohen et al. [4] demonstrates that Gaussian
mixture models can be used to recover features from human classification data without specifying
a fixed set of possible features. The GRIFT model, described in the remainder of this paper, has
the same goals as the previous work, but removes several limitations of the Gaussian mixture model
approach, including the need to only use stimuli the subjects classified with high confidence and
the bias that the signals can exert on the recovered features. GRIFT achieves these and other improvements by generatively modeling the entire classification process with a graphical model. Furthermore, the similarity between single-feature GRIFT models and the classification image process,
described in more detail below, makes GRIFT a natural successor to the traditional approach.
3 GRIFT model
GRIFT models classification as the result of combining N conditionally independent feature detectors, F = {F1 , F2 , . . . , FN }. Each feature detector is binary valued (1 indicates detection), as is
the classification, C (1 indicates one class and 2 the other). The stimulus, S, is an array of continuously valued pixels representing the input image. The stimulus only influences C through the
feature detectors, therefore the joint probability of a stimulus and classification pair is
    P(C, S) = sum_F ( P(C|F) P(S) prod_{i=1}^{N} P(F_i|S) ).
Figure 1 represents the causal relationship between these variables (C, F, and S) with a Bayesian network. The network also includes nodes representing model parameters (ω, λ, and θ), whose role will be described below. The boxed region in the figure indicates the parts of the model that are replicated when N > 1: each feature detector is represented by an independent copy of those variables and parameters.
The distribution of the stimulus, P(S), is under the control of the experimenter. The algorithm for fitting the model to data only assumes that the stimuli are independent and identically distributed across trials. The conditional distribution of each feature detector's value, P(F_i | S), is modeled with a logistic regression function on the pixel values of S. Logistic regression is desirable because it is a probabilistic linear classifier. Humans can successfully classify images in the presence of extremely high additive noise, which suggests the use of averaging and contrast, linear computations which
are known to play important roles in human visual perception [9]. Just as the classification image
used a random threshold to represent uncertainty in the output of its single linear classifier, logistic
regression also allows GRIFT to represent uncertainty in the output of each of its feature detectors.
The conditional distribution of C is represented by logistic regression on the feature outputs.
Each F_i's distribution has two parameters, a weight vector λ_i and a threshold θ_i, such that
$$P(F_i = 1 \mid S, \lambda_i, \theta_i) = \Bigl(1 + \exp\bigl(\theta_i + \textstyle\sum_{j=1}^{|S|} \lambda_{ij} S_j\bigr)\Bigr)^{-1},$$
where |S| is the number of pixels in a stimulus. Similarly, the conditional distribution of C is determined by ω = {ω_0, ω_1, . . . , ω_N}, where
$$P(C = 1 \mid F, \omega) = \Bigl(1 + \exp\bigl(\omega_0 + \textstyle\sum_{i=1}^{N} \omega_i F_i\bigr)\Bigr)^{-1}.$$
Detecting a feature with negative ω_i increases the probability that the subject will respond "class 1"; those with positive ω_i are associated with "class 2" responses.
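For concreteness, these two conditional distributions compose into the forward model of the first equation above. Here is a minimal NumPy sketch (the names and array shapes are ours, and the exact sum over F is only practical for small N):

```python
import numpy as np
from itertools import product

def p_features(S, lam, theta):
    """P(F_i = 1 | S, lambda_i, theta_i) for all N detectors.
    S: (P,) pixel values; lam: (N, P) weight vectors; theta: (N,) thresholds."""
    return 1.0 / (1.0 + np.exp(theta + lam @ S))

def p_class1(S, lam, theta, omega):
    """P(C = 1 | S): marginalize P(C = 1 | F) over all 2^N feature configurations."""
    pf = p_features(S, lam, theta)
    total = 0.0
    for f in product([0, 1], repeat=len(pf)):
        f = np.asarray(f, dtype=float)
        p_f = np.prod(np.where(f == 1.0, pf, 1.0 - pf))       # P(F = f | S)
        p_c = 1.0 / (1.0 + np.exp(omega[0] + omega[1:] @ f))  # P(C = 1 | F = f)
        total += p_f * p_c
    return total
```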
A GRIFT model with N features applied to the classification of images each containing |S| pixels
has N (|S| + 2) + 1 parameters. This large number of parameters, coupled with the fact that the
F variables are unobservable, makes fitting the model to data very challenging. Therefore, GRIFT
defines prior distributions on its parameters. These priors reflect reasonable assumptions about the
parameter values and, if they are wrong, can be overturned if enough contrary data is available. The
prior on each of the ω_i parameters for which i > 0 is a mixture of two normal distributions,
$$P(\omega_i) = \frac{1}{2\sqrt{2\pi}} \left( \exp\Bigl(-\frac{(\omega_i - 2)^2}{2}\Bigr) + \exp\Bigl(-\frac{(\omega_i + 2)^2}{2}\Bigr) \right).$$
This prior reflects the assumption that each feature detector should have a significant impact on the classification, but no single detector should make it deterministic: a single-feature model with ω_0 = 0 and ω_1 = −2 has an 88% chance of choosing class 1 if the feature is active. The ω_0 parameter has an improper non-informative prior, P(ω_0) = 1, indicating no preference for any particular value [5], because the best ω_0 is largely determined by the other ω_i's and the distributions of F and S. For analogous reasons, P(θ_i) = 1.
The λ_i parameters, which each have dimensionality equal to the stimulus, present the biggest inferential challenge. As mentioned previously, human visual processing is sensitive to contrasts between image regions. If one image region is assigned positive λ_ij's and another is assigned negative λ_ij's, the feature detector will be sensitive to the contrast between them. This contrast between regions requires all the pixels within each region to share similar λ_ij values. To encourage this local structure, the λ_i parameters have Markov random field prior distributions:
$$P(\lambda_i) \propto \left[ \prod_{j} \left( \exp\Bigl(-\frac{(\lambda_{ij} + 1)^2}{2}\Bigr) + \exp\Bigl(-\frac{(\lambda_{ij} - 1)^2}{2}\Bigr) \right) \right] \left[ \prod_{(j,k) \in A} \exp\Bigl(-\frac{(\lambda_{ij} - \lambda_{ik})^2}{2}\Bigr) \right],$$
where A is the set of neighboring pixel locations. The first factor encourages weight values to be
near the -1 to 1 range, while the second encourages the assignment of similar weights to neighboring
pixels. Fitting the model to data does not require the normalization of this distribution.
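Because the normalization can be ignored, the unnormalized log of this prior is straightforward to evaluate. A sketch under our own naming, where `neighbors` supplies the pixel-adjacency pairs A:

```python
import numpy as np

def log_prior_lambda(lam_i, neighbors):
    """Unnormalized log P(lambda_i): a +/-1 mixture term per pixel plus a
    smoothness term over neighboring pixel pairs (j, k) in A."""
    mix = np.logaddexp(-(lam_i + 1.0) ** 2 / 2.0,
                       -(lam_i - 1.0) ** 2 / 2.0).sum()
    smooth = sum(-(lam_i[j] - lam_i[k]) ** 2 / 2.0 for j, k in neighbors)
    return mix + smooth
```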
The Bayesian joint probability distribution of all the parameters and variables is
$$P(C, F, S, \omega, \lambda, \theta) = P(C \mid F, \omega)\, P(S)\, P(\omega_0) \prod_{i=1}^{N} P(F_i \mid S, \lambda_i, \theta_i)\, P(\omega_i)\, P(\lambda_i)\, P(\theta_i). \qquad (1)$$
4 GRIFT algorithm
The goal of the algorithm is to find the parameters that satisfy the prior distributions and best account for the (S, C) samples gathered from a human subject. Mathematically, this goal corresponds to finding the mode of P(ω, λ, θ | S, C), where S and C refer to all of the observed samples. The algorithm is derived using the expectation-maximization (EM) method [3], a widely used optimization technique for dealing with unobserved variables, in this case F, the feature detector outputs for all the trials. In order to determine the most probable parameter assignments, the algorithm chooses random initial parameters Θ′ = (ω′, λ′, θ′) and then finds the Θ that maximizes
$$Q(\Theta \mid \Theta') = \sum_{F} P(F \mid S, C, \Theta') \log P(C, F, S \mid \Theta) + \log P(\Theta).$$
Q(Θ | Θ′) is the expected log posterior probability of the parameters, computed by using the current Θ′ to estimate the distribution of F, the unobserved feature detector activations. The Θ that maximizes Q then becomes Θ′ for the next iteration, and the process is repeated until convergence.
The presence of both the P(C, F, S | Θ) and P(Θ) terms encourages the algorithm to find parameters that explain the data and match the assumptions encoded in the parameter prior distributions. As the amount of available data increases, the influence of the priors decreases, so it is possible to discover features that are contrary to prior belief given enough evidence.
Using the conditional independences from the Bayes net:
$$Q(\Theta \mid \Theta') \simeq \sum_{F} P(F \mid S, C, \Theta') \left( \log P(C \mid F, \omega) + \sum_{i=1}^{N} \log P(F_i \mid S, \lambda_i, \theta_i) \right) + \sum_{i=1}^{N} \bigl( \log P(\omega_i) + \log P(\lambda_i) \bigr),$$
dropping the log P(S) term, which is independent of the parameters, and the log P(ω_0) and log P(θ_i) terms, which are 0. As mentioned before, the normalization terms for the log P(λ_i) elements can be ignored during optimization: the log makes them additive constants to Q. The functional form of every additive term is described in Section 3, and P(F | S, C, Θ′) can be calculated using the model's joint probability function (Equation 1).
Each iteration of EM requires maximizing Q, but it is not possible to compute the maximizing Θ in closed form. Fortunately, it is relatively easy to search for the best Θ. Because Q is separable into many additive components, it is possible to efficiently compute its gradient with respect to each of the elements of Θ and use this information to find a locally maximum Θ assignment using the scaled conjugate gradient algorithm [2]. Even a locally maximum value of Θ usually provides good EM results: P(ω, λ, θ | S, C) is still guaranteed to improve after every iteration.
The result of any EM procedure is only guaranteed to be a locally optimal answer, and finding the globally optimal Θ is made more challenging by the large number of parameters. GRIFT adopts the standard solution of running EM many times, each instance starting with a random Θ′, and then accepting the Θ from the run which produced the most probable parameters. For this model and the data presented in the following sections, 20-30 random restarts were sufficient.
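The overall procedure can be summarized in code. The sketch below is our own compact reading of the algorithm, not the authors' MATLAB implementation: it enumerates the 2^N feature configurations exactly in the E-step and, for brevity, lets the optimizer use numerical gradients in the M-step where the paper uses analytic gradients with scaled conjugate gradient.

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

def fit_grift(S, C, N, neighbors, restarts=20, iters=25, seed=0):
    """EM with random restarts for GRIFT.
    S: (T, P) stimuli; C: (T,) classes in {1, 2}; neighbors: pixel pairs A.
    Returns the best flat parameter vector x = [lambda, theta, omega]."""
    rng = np.random.default_rng(seed)
    T, P = S.shape
    F = np.array(list(product([0, 1], repeat=N)), dtype=float)  # (2^N, N)
    is1 = (C == 1)

    def unpack(x):
        return x[:N*P].reshape(N, P), x[N*P:N*P+N], x[N*P+N:]

    def log_terms(x):
        lam, theta, omega = unpack(x)
        z = S @ lam.T + theta                             # (T, N)
        lp1, lp0 = -np.logaddexp(0, z), -np.logaddexp(0, -z)
        log_pf = F @ lp1.T + (1 - F) @ lp0.T              # (2^N, T): log P(F=f | S_t)
        u = omega[0] + F @ omega[1:]                      # (2^N,)
        log_pc = np.where(is1, -np.logaddexp(0, u)[:, None],
                          -np.logaddexp(0, -u)[:, None])  # (2^N, T): log P(C_t | f)
        return log_pf, log_pc

    def log_prior(x):
        lam, _, omega = unpack(x)
        lp = np.logaddexp(-(omega[1:] - 2)**2 / 2, -(omega[1:] + 2)**2 / 2).sum()
        lp += np.logaddexp(-(lam + 1)**2 / 2, -(lam - 1)**2 / 2).sum()
        lp -= sum(((lam[:, j] - lam[:, k])**2 / 2).sum() for j, k in neighbors)
        return lp

    best_x, best_lp = None, -np.inf
    for _ in range(restarts):
        x = rng.normal(scale=0.1, size=N*P + 2*N + 1)
        for _ in range(iters):
            log_pf, log_pc = log_terms(x)                 # E-step
            joint = log_pf + log_pc
            W = np.exp(joint - joint.max(axis=0))
            W /= W.sum(axis=0)                            # P(F | S, C, x) per trial
            neg_q = lambda y: -((W * np.add(*log_terms(y))).sum() + log_prior(y))
            x = minimize(neg_q, x, method="CG").x         # M-step
        lp = np.logaddexp.reduce(np.add(*log_terms(x)), axis=0).sum() + log_prior(x)
        if lp > best_lp:
            best_x, best_lp = x, lp
    return best_x
```

Each restart begins from a different random Θ′, and the run with the highest log posterior is kept, mirroring the restart strategy described above.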
5 Experiments
The GRIFT model was fit to data from 3 experiments. In each experiment, human participants
classified stimuli into two classes. Each class contained one or more target stimuli. In each trial,
the participant saw a stimulus (a sample from S) that consisted of a randomly chosen target with
high levels of independent identically distributed noise added to each pixel. The noise samples were
drawn from a truncated normal distribution to ensure that the stimulus pixel values remained within
the display's output range. Figure 1 shows the classes and targets from each experiment and a sample
stimulus from each class. In the four-square experiment four participants were asked to distinguish
between two artificial stimulus classes, one in which there were bright squares in the upper-left
or upper-right corners and one in which there were bright squares in the lower-left or lower-right
corners. In the light-dark experiment three participants were asked to distinguish between three
strips that each had two light blobs and three strips that each had only one light blob. Finally, in the
faces experiment three participants were asked to distinguish between two faces. The four-square
data were collected by [7] and were also analyzed in [4]. The other data are newly gathered. Each
data set consists of approximately 4000 trials from each subject. To maintain their interest in the
task, participants were given auditory feedback after each trial that indicated success or failure.
Figure 2: The most probable λ parameters found for the four-square experiments for different values of N, and the mutual information between these feature detectors and the observed classifications.
Fitting GRIFT models is not especially sensitive to the random initialization procedure used to start each EM instance. The ω parameters were initialized by normal random samples and then half were negated so the features would tend to start evenly assigned to the two classes, except for ω′_0, which was initialized to 0. In the four-square experiments, the λ′ parameters were initialized by a mixture of normal distributions and in the light-dark experiments they were initialized from a uniform distribution. In the faces experiments the λ′ were initialized by adding normal noise to the optimal linear classifier separating the two targets. Because of the large number of pixels in the faces stimuli, the other initialization procedures frequently produced initial assignments with extremely low probabilities, which led to numerical precision problems. In the four-square experiments, the θ′ were initialized randomly. In the other experiments, the intent was to set them to the optimal threshold for distinguishing the classes using the initial λ′ as a linear classifier, but a programming error set them to the negation of that value. In most cases, the results were insensitive to the choice of initialization method.
In the four-square experiment, the noise levels were continually adjusted to keep the participants'
performance at approximately 71% using the stair-casing algorithm [8]. This performance level is
high enough to keep the participants engaged in the task, but allows for sufficient noise to explore
their responses in a large volume of the stimulus space. After an initial adaptation period, the
noise level remains relatively constant across trials, so the inter-trial dependence introduced by the
stair-casing can be safely ignored. Two simulated observers were created to validate GRIFT on
the four-square task. Each used a GRIFT model with pre-specified parameters to probabilistically
classify four-square data at a fixed noise level, which was chosen to produce approximately 70%
correct performance. The corners observer used four feature detectors, one for each bright corner,
whereas the top-v.-bottom observer contrasted the brightness of the top and bottom pixels.
The results of using GRIFT to recover the feature detectors are displayed in Figure 2. Only the λ parameters are displayed because they are the most informative. Dark pixels indicate negative weights and bright pixels correspond to positive weights. The presence of dark and light regions in a feature detector indicates the computation of contrasts between those areas. The sign of the weights is not significant: given a fixed number of features, there are typically several equivalent sets of feature detectors that only differ from each other in the signs of their λ terms and in the associated ω and θ values.
Because the optimal number of features for human subjects is unknown, GRIFT models with 1 to 4
features were fit to the data from each subject. The correct number of features could be determined
by holding out a test set or by performing cross-validation. Simulation demonstrated that a reliable
test set would need to contain nearly all of the gathered samples, and computational expense made
cross-validation impractical with our current MATLAB implementation. Instead, after recovering
the parameters, we estimated the mutual information between the unobserved F variables and the
observed classifications C. Mutual information measures how well the feature detector outputs can
predict the subject's classification decision. Unlike the log likelihood of the observations, which is
dependent on the choice to model C with a logistic regression function, mutual information does
not assume a particular relationship between F and C and does not necessarily increase with N .
Plotting the mutual information as N increases can indicate if new detectors are making a substantial
contribution or are overfitting the data. On the simulated observers' data, for which the true values of
N were known, mutual information was a more accurate model selection indicator than traditional
statistics such as the Bayesian or Akaike information criteria [3].
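One plausible way to compute such an estimate, assuming the fitted model provides a posterior over feature configurations for each trial, is shown below; the exact estimator used is not spelled out here, so treat this as an illustration with our own names:

```python
import numpy as np

def mutual_information_bits(p_f_given_trial, C):
    """Estimate I(F; C) in bits from per-trial posteriors over the 2^N feature
    configurations and the observed classes C in {1, 2}."""
    T, K = p_f_given_trial.shape
    joint = np.zeros((K, 2))
    for t in range(T):
        joint[:, C[t] - 1] += p_f_given_trial[t]
    joint /= T                                   # empirical joint P(f, c)
    pf = joint.sum(axis=1, keepdims=True)        # P(f)
    pc = joint.sum(axis=0, keepdims=True)        # P(c)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (pf @ pc)[mask])).sum())
```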
Fitting GRIFT to the simulated observers demonstrated that if the model is accurate, the correct features can be recovered reliably. The top-v.-bottom observer showed no substantial increase in mutual information as the number of features increased from 1 to 4. Each set of recovered feature detectors included a top-bottom contrast detector and other detectors with noisy λ_i's that did not contribute much to predicting C. Although the observer truly used two detectors, one top-brighter detector and one bottom-brighter detector, the recovery of only one top-bottom contrast detector is a success because one contrast detector plus a suitable ω_0 term is logically equivalent to the original two-feature model. The corners observer showed a substantial increase in mutual information as N increased from 1 to 4 and the λ values clearly indicate four corner-sensitive feature detectors. The corners data was also tested with a five-feature GRIFT model (λ not shown) which produced four corner detectors and one feature with noisy λ_i. Its gain in mutual information was smaller than that observed on any of the previous steps. Note the corner areas in the λ_i's recovered from the corners data are sometimes black and sometimes white. Recall that these are not image pixel values that the detectors are attempting to match, but positive and negative weights indicating that the brightness in the corner region is being contrasted to the brightness of the rest of the image.
Even though targets consisted of four bright-corner stimuli, recovering the parameters from the top-v.-bottom observer never produced λ values indicating corner-specific feature detectors. An important advantage of GRIFT over previous methods such as [4] is that targets will not "contaminate" the recovered detectors. The simulations demonstrate that the recovered detectors are determined by the classification strategy, not by the structure of the targets and classes.
The data of the four human participants revealed some interesting differences. Participants EA and
RS were naive, while AC and JG were not. The largest disparity was between EA and JG. EA's data indicated no consistent pattern of mutual information increase after two features, and the two-feature model appears to contain two top-bottom contrast detectors. Therefore, it is reasonable to
conclude that EA was not explicitly detecting the corners. At the other extreme is participant JG,
whose data shows four very clear corner detectors and a steady increase in mutual information up to
four features. Therefore, it seems very likely that this participant was matching corners and probably
should be tested with a five-feature model to gain additional insight. AC and RS's data suggest three
corner detectors and a top-bottom contrast detector. GRIFT's output indicates qualitative differences
in the classification strategies used by the four human participants.
Across all participants, the best one-feature model was based on the contrast between the top of the
image and the bottom. This is extremely similar to the result produced by a classification image of
the data, reinforcing the strong similarity between one-feature GRIFT and that approach.
In the light-dark and faces experiments, stair-casing was used to adjust the noise level to the 71%
performance level at the beginning of each session and then the noise level was fixed for the remaining trials to improve the independence of the samples. Participants were paid and promised a $10
reward for achieving the highest score on the task.
Participants P1, P2, and P3 classified the light-dark stimuli. P1 and P2 achieved at or above the expected performance level (82% and 73% accuracy), while P3's performance was near chance (55%).
Because the noise levels were fixed after the first 101 trials, a participant with good luck at the end
of that period could experience very high noise levels for the remainder of the experiment, leading
to poor performance. All three participants appear to have used different classification methods,
providing a very informative contrast. The results of fitting the GRIFT model are in Figure 3.
The flat mutual information graph and the presence of a feature detector thresholding the overall
brightness for each value of N indicate that P1 pursued a one-feature, linear-classifier strategy. P2,
on the other hand, clearly employed a multi-feature, non-linear strategy. For N = 1 and N = 2, the
most interpretable feature detector is an overall brightness detector, which disappears when N = 3
and the best fit model consists of three detectors looking for specific patterns, one for each position a
Figure 3: The most probable λ parameters found for the light-dark and faces experiments for different N, and the mutual information between these feature detectors and the observed classifications.
bright or dark spot can appear. Then when N = 4 the overall brightness detector reappears, added to
the three spot detectors. Apparently the spot detectors are only effective if they are all present. With
only three available detectors, the overall brightness detector is excluded, but the optimal assignment
includes all four detectors. This is the best-fit model because increasing to N = 5 keeps the mutual
information constant and adds a detector that is active for every stimulus. Always active detectors
function as constant additions to ω_0; therefore this is equivalent to the N = 4 solution.
The GRIFT models of participant P3 do not show a substantial increase in mutual information as the
number of features rises. This lack of increase leads to the conclusion that the one-feature model is
probably the best fit, and since performance was extremely low, it can be assumed that the subject
was reduced to near random guessing much of the time.
The clear distinction between the results for all three subjects demonstrates the effectiveness of
GRIFT and the mutual information measure in distinguishing between classification strategies.
The faces presented the largest computational challenges. The targets were two unfiltered faces
from Gold et al.'s data set [6], down-sampled to 128x128. After the experiment, the stimuli were
down-sampled further to 32x32 and the background surrounding the faces was removed by cropping,
reducing the stimuli to 26x17. These steps made the algorithm computationally feasible, and reduced
the number of parameters so they would be sufficiently constrained by the samples.
The results for three participants (P4, P5, and P6) are in Figure 3. Participants P4 and P5's data were clearly best fit by one-feature GRIFT models. Increasing the number of features simply caused the algorithm to add features that were never or always active. Never-active features cannot affect the classification, and, as explained previously, always-active features are also superfluous. P4's one-feature model clearly places significant weight near the eyebrows, nose, and other facial features. P5's one-feature weights are much noisier and harder to interpret. This might be related to P5's poor performance on the task: only 53% accuracy compared to P4's 72% accuracy. Perhaps the noise level was too high and P5 was guessing rather than using image information much of the time.
Participant P6's data did produce a two-feature GRIFT model, albeit one that is difficult to interpret and which only caused a small rise in mutual information. Instead of recovering independent part detectors, such as a nose detector and an eye detector, GRIFT extracted two subtly different holistic feature detectors. Given P6's poor performance (58% accuracy), these features may, like P5's results,
be indicative of a guessing strategy that was not strongly influenced by the image information.
The results on faces support the hypothesis that face classification is holistic and configural, rather
than the result of part classifications, especially when individual feature detection is difficult [11].
Across these experiments, the data collected were compatible with the original classification image
method. In fact, the four-square human data were originally analyzed using that algorithm. One of
the advantages of GRIFT is that it can reanalyze old data to reveal new information. In the one-feature case, GRIFT enables the use of prior probabilities on the parameters, which may improve
performance when data is too scarce for the classification image approach. Most importantly, fitting
multi-feature GRIFT models can reveal previously hidden non-linear classification strategies.
6 Conclusion
This paper has described the GRIFT model for determining the features used in human image classification. GRIFT is an advance over previous methods that assume a single linear classifier on pixels
because it describes classification as the combination of multiple independently detected features. It
provides a probabilistic model of human visual classification that accounts for data and incorporates
prior beliefs about the features. The feature detectors it finds are associated with the classification
strategy employed by the observer and are not the result of structure in the classes' target images.
GRIFT's value has been demonstrated by modeling the performance of humans on the four-square,
light-dark, and faces classification tasks and by successfully recovering the parameters of computer
simulated observers in the four-square task. Its inability to find multiple local features when analyzing human performance on the faces task agrees with previous results.
One of the strengths of the graphical model approach is that it allows easy replacement of model
components. An expert can easily change the prior distributions on the parameters to reflect knowledge gained in previous experiments. For example, it might be desirable to encourage the formation
of edge detectors. New resolution-independent feature parameterizations can be introduced, as can
transformation parameters to make the features translationally and rotationally invariant. If the features have explicitly parameterized locations and orientations, the model could be extended to model
their joint relative positions, which might provide more information about domains such as face classification. The success of this version of GRIFT provides a firm foundation for these improvements.
Acknowledgments
This research was supported by NSF Grant SES-0631602 and NIMH grant MH16745. The authors
thank the reviewers, Tom Griffiths, Erik Learned-Miller, and Adam Sanborn for their suggestions.
References
[1] A.J. Ahumada, Jr. Classification image weights and internal noise level estimation. Journal of Vision,
2(1), 2002.
[2] C.M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[3] C.M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[4] A.L. Cohen, R.M. Shiffrin, J.M. Gold, D.A. Ross, and M.G. Ross. Inducing features from visual noise.
Journal of Vision, 7(8), 2007.
[5] A. Gelman, J.B. Carlin, H.S. Stern, and D.B. Rubin. Bayesian Data Analysis. Chapman & Hall/CRC,
2003.
[6] J.M. Gold, P.J. Bennett, and A.B. Sekuler. Identification of band-pass filtered letters and faces by human
and ideal observers. Vision Research, 39, 1999.
[7] J.M. Gold, A.L. Cohen, and R. Shiffrin. Visual noise reveals category representations. Psychonomic
Bulletin & Review, 15(4), 2006.
[8] N.A. Macmillan and C.D. Creelman. Detection Theory: A User?s Guide. Lawrence Erlbaum Associates,
2005.
[9] S.E. Palmer. Vision Science: Photons to Phenomenology. The MIT Press, 1999.
[10] D.G. Pelli, B. Farell, and D.C. Moore. The remarkable inefficiency of word recognition. Nature, 425,
2003.
[11] J. Sergent. An investigation into component and configural processes underlying face perception. British
Journal of Psychology, 75, 1984.
Discovering Viewpoint-Invariant Relationships
That Characterize Objects
Richard S. Zemel and Geoffrey E. Hinton
Department of Computer Science
University of Toronto
Toronto, ONT M5S 1A4
Abstract
Using an unsupervised learning procedure, a network is trained on an ensemble of images of the same two-dimensional object at different positions,
orientations and sizes. Each half of the network "sees" one fragment of
the object, and tries to produce as output a set of 4 parameters that have
high mutual information with the 4 parameters output by the other half of
the network. Given the ensemble of training patterns, the 4 parameters on
which the two halves of the network can agree are the position, orientation,
and size of the whole object, or some recoding of them. After training,
the network can reject instances of other shapes by using the fact that the
predictions made by its two halves disagree. If two competing networks
are trained on an unlabelled mixture of images of two objects, they cluster
the training cases on the basis of the objects' shapes, independently of the
position, orientation, and size.
1 INTRODUCTION
A difficult problem for neural networks is to recognize objects independently of
their position, orientation, or size. Models addressing this problem have generally
achieved viewpoint-invariance either through a separate normalization procedure
or by building translation- or rotation-in variance into the structure of the network.
This problem becomes even more difficult if the network must learn to perform
viewpoint-invariant recognition without any supervision signal that indicates the
correct viewpoint, or which object is which during training.
In this paper, we describe a model that is trained on an ensemble of instances of the
same object, in a variety of positions, orientations and sizes, and can then recognize
new instances of that object. We also describe an extension to the model that allows
it to learn to recognize two different objects through unsupervised training on an
unlabelled mixture of images of the objects.
2 THE VIEWPOINT CONSISTENCY CONSTRAINT
An important invariant in object recognition is the fixed spatial relationship between
a rigid object and each of its component features. We assume that each feature has
an intrinsic reference frame, which can be specified by its instantiation parameters,
i.e., its position, orientation and size with respect to the image. For a rigid object
and a particular feature of that object, there is a fixed viewpoint-independent transformation from the feature's reference frame to the object's. Given the instantiation
parameters of the feature in an image, we can use the transformation to predict
the object's instantiation parameters. The viewpoint consistency constraint (Lowe,
1987) states that all of the features belonging to the same rigid object should make
consistent predictions of the object's instantiation parameters. This constraint has
played an important role in many shape recognition systems (Roberts, 1965;
Ballard, 1981; Hinton, 1981; Lowe, 1985).
2.1 LEARNING THE CONSTRAINT: SUPERVISED
A recognition system that learns this constraint is TRAFFIC (Zemel, Mozer and
Hinton, 1989). In TRAFFIC, the constraints on the spatial relations between features of an object are directly expressed in a connectionist network. For two-dimensional shapes, an object instantiation contains 4 degrees of freedom: (x, y)-position, orientation, and size. These parameter values, or some recoding of them,
can be represented in a set of 4 real-valued instantiation units. The network has
a modular structure, with units devoted to each object or object fragment to be
recognized. In a recognition module, one layer of instantiation units represents the
instantiation parameters of each of an object's features; these units connect to a set
of units that represent the object's instantiation parameters as predicted by this
feature; and these predictions are combined into a single object instantiation in another set of instantiation units. The set of weights connecting the instantiation units
of the feature and its predicted instantiation for the object are meant to capture
the fixed, linear reference frame transformation between the feature and the object.
These weights are trained by showing various instantiations of the object, and the
object's instantiation parameters act as the training signal for each of the features'
predictions. Through this supervised procedure, the features of an object learn to
predict the instantiation parameters for the object. Thus, when the features of the
object are present in the image in the appropriate relationship, the predictions are
consistent and this consistency can be used to decide that the object is present. Our
simulations showed that TRAFFIC was able to learn to recognize constellations in
realistic star-plot images.
2.2 LEARNING THE CONSTRAINT: UNSUPERVISED
The goal of the current work is to use an unsupervised procedure to discover and
use the consistency constraint.
Figure 1: A module with two halves that try to agree on their predictions. The
input to each half is 100 intensity values (indicated by the areas of the black circles).
Each half has 200 Gaussian radial basis units (constrained to be the same for the
two halves) connected to 4 output units.
We explore this idea using a framework similar to that of TRAFFIC, in which
different features of an object are represented in different parts of the recognition
module, and each part generates a prediction for the object's instantiation parameters. Figure 1 presents an example of the kind of task we would like to solve. The
module has two halves. The rigid object in the image is very simple - it has two
ends, each of which is composed of two Gaussian blobs of intensity. Each image
in the training set contains one instance of the object. For now, we constrain the
instantiation parameters of the object so that the left half of the image always contains one end of the object, and the right half the other end. This way, just based
on the end of the object in the input image that it sees, each half of the module
can always specify the position, orientation and size of the whole object. The goal
is that, after training, for any image containing this object, the output vectors of
both halves of the module, a and h, should both represent the same instantiation
parameters for the object.
In TRAFFIC, we could use the object's instantiation parameters as a training signal
for both module halves, and the features would learn their relation to the object.
Now, without providing a training signal, we would like the module to learn that
what is consistent across the ensemble of images is the relation between the position,
orientation, and size of each end of that object. The two halves of a module trained
on a particular shape should produce consistent instantiation parameters for any
instance of this object. If the features are related in a different way, then these
predictions should disagree. If the module learns to do this through an unsupervised
procedure, it has found a viewpoint-invariant spatial relationship that characterizes
the object, and can be used to recognize it.
3 THE IMAX LEARNING PROCEDURE
We describe a version of the IMAX learning procedure (Hinton and Becker, 1990)
that allows a module to discover the 4 parameters that are consistent between
the two halves of each image when it is presented with many different images of
the same, rigid object in different positions, orientations and sizes. Because the
training cases are all positive examples of the object, each half of the module tries
to extract a vector of 4 parameters that significantly agrees with the 4 parameters
extracted by the other half. Note that the two halves can agree on each instance
by outputting zero on each case, but this agreement would not be significant. To
agree significantly, each output vector must vary from image to image, but the
two output vectors must nevertheless be the same for each image. Under suitable
Gaussian assumptions, the significance of the agreement between the two output
vectors can be computed by comparing the variances across training cases of the
parameters produced by the individual halves of the module with the variances of
the differences of these parameters.
We assume that the two output vectors, a and h, are both noisy versions of the same
underlying signal, the correct object instantiation parameters. If we assume that
the noise is independent, additive, and Gaussian, the mutual information between
the presumed underlying signal and the average of the noisy versions of that signal
represented by a and h is:
$$I(a; h) = \frac{1}{2} \log \frac{|\Sigma_{(a+h)}|}{|\Sigma_{(a-h)}|} \qquad (1)$$
where |Σ_(a+h)| is the determinant of the covariance matrix of the sum of a and h (see (Becker and Hinton, 1989) for details). We train a recognition module by
setting its weights so as to maximize this objective function. By maximizing the
determinant, we are discouraging the components of the vector a + h from being
linearly dependent on one another, and thus assure that the network does not
discover the same parameter four times.
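Equation (1) is easy to estimate from a batch of training cases using sample covariances of the two 4-dimensional output vectors; a sketch with our own variable names:

```python
import numpy as np

def imax_objective(A, B):
    """I(a; b) = 0.5 * log(|Sigma_(a+b)| / |Sigma_(a-b)|), estimated from samples.
    A, B: (T, 4) outputs of the two module halves over T training cases."""
    _, logdet_sum = np.linalg.slogdet(np.cov((A + B).T))
    _, logdet_diff = np.linalg.slogdet(np.cov((A - B).T))
    return 0.5 * (logdet_sum - logdet_diff)
```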
4 EXPERIMENTAL RESULTS
Using this objective function, we have experimented with different training sets,
input representations and network architectures. We discuss two examples here.
In all of the experiments described, we fix the number of output units in each module to be 4, matching the underlying degrees of freedom in the object instantiation
parameters. We are in effect telling the recognition module that there are 4 parameters worth extracting from the training ensemble. For some tasks there may
be less than 4 parameters. For example, the same learning procedure should be
able to capture the lower-dimensional constraints between the parts of objects that
contain internal degrees of freedom in their shape (e.g., scissors), but we have not
yet tested this.
The first set of experiments uses training images like Figure 1. The task requires
an intermediate layer between the intensity values and the instantiation parameters
vector. Each half of the module has 200 non-adaptive, radial basis units. The
means of the RBFs are formed by randomly sampling the space of possible images
of an end of the object; the variances are fixed. The output units are linear. We
maximize the objective function I by adjusting the weights from the radial basis
units to the output units, after each full sweep through the training set.
The optimization requires 20 sweeps of a conjugate gradient technique through 1000
training cases. Unfortunately, it is difficult to interpret the outputs of the module,
since it finds a nonlinear transform of the object instantiation parameters. But the
mutual information is quite high - about 7 bits. After training, the predictions
made by the two halves are consistent on new images. We measure the consistency
in the predictions for an image using a kind of generalized Z-score, which relates
the difference between the predictions on a particular case (d_i) to the distribution of this difference across the training set:
$$Z(d_i) = (d_i - \bar{d})^{\top}\, \Sigma^{-1}\, (d_i - \bar{d}), \qquad (2)$$
where $\bar{d}$ and $\Sigma$ are the mean and covariance of the differences over the training set.
A low Z-score indicates a consistent match. After training, the module produces
high Z-scores on images where the same two ends are present, but are in a different
relationship than the object on which it was trained. In general, the Z-scores
increase smoothly with the degree of perturbation in the relationship between the
two ends, indicating that the module has learned the constraint.
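The Z-score of Equation (2) can be computed directly from the training-set statistics of the prediction differences; again a sketch with our own names:

```python
import numpy as np

def z_scores(A, B):
    """Generalized Z-score of Equation (2) for each case, where d_i = a_i - b_i
    and the mean and covariance are taken over the training set."""
    D = A - B
    centered = D - D.mean(axis=0)
    prec = np.linalg.inv(np.cov(D.T))   # inverse covariance of the differences
    return np.einsum('ti,ij,tj->t', centered, prec, centered)
```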
In the second set of experiments, we remove an unrealistic constraint on our images
- that one end of the object must always fall in one half of the image. Instead
we assume that there is a feature-extraction process that finds instances of simple
features in the image and passes on to the module a set of parameters describing
the position, orientation and spatial extent of each feature. This is a reasonable
assumption, since low-level vision is generally good at providing accurate descriptions of simple features that are present in an image (such as edges and corners),
and can also specify their locations.
In these experiments, the feature-extraction program finds instances of two features
of the letter y - the upper u-shaped curve and the long vertical stroke with a curved
tail. The recognition module then tries to extract consistent object instantiation
parameters from these feature instantiation parameters by maximizing the same
mutual information objective as before.
There are several advantages of this second scheme. The first set of training instances were artificially restricted by the requirement that one end must appear in
the left half of the image, and the other in the right half. Now since a separate
process is analyzing the entire image to find a feature of a given type, we can use
the entire space of possible instantiation parameters in the training set. With the
simpler architecture, we can efficiently handle more complex images. In addition, no
hidden layer is necessary - the mapping from the features' instantiation parameters
to the object's instantiation parameters is linear.
Using this scheme, only twelve sweeps through 1000 training cases are necessary
to optimize the objective function. The speed-up is likely due to the fact that the
input is already parameterized in an appropriate form for the extraction of the
instantiation parameters. This method also produces robust recognition modules,
which reject instances where the relationships between the two input vectors does
not match the relationship in the training set. We test this robustness by adding
noise of varying magnitudes separately to each component of the input vectors, and
measuring the Z-scores of the output vectors. As expected, the agreement between
the two outputs of a module degrades smoothly with added noise.
5 COMPETITIVE IMAX
We are currently working on extending this idea to handle multiple shapes. The
obvious way to do this using modules of the type described above is to force the
modules to specialize by training each module separately on images of a particular shape, and then to recognize shapes by giving the image to each module and
seeing which module achieves the lowest Z-score. However, this requires supervised
training in which the images are labelled by the type of object they contain. We
are exploring an entirely unsupervised method in which images are unlabelled, and
every image is processed by many competing modules.
Each competing module has a responsibility for each image that depends on the
consistency between the two output vectors of the module. The responsibilities are
normalized so that, for each image, they sum to one. In computing the covariances
for a particular module in Equation 1, we weight each training case by the module's
responsibility for that case. We also compute an overall mixing proportion, π_m,
for each module, which is just the average of its responsibilities. We extend the
objective function I to multiple modules as follows:
$$I^{*} = \sum_{m} \pi_m I_m \qquad (3)$$
We could compute the relative responsibilities of modules by comparing their Z-scores, but this would lead to a recurrent relationship between the responsibilities and the weights within a module. To avoid this recurrence, we simply store the responsibility of each module for each training case. We optimize I* by interleaving updates of the weights within each module, with updates of the stored responsibilities. This learning is a sophisticated form of competitive learning. Rather than
clustering together input vectors that are close to one another in the input space,
the modules cluster together input vectors that share a common spatial relationship
between their two halves.
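Under our reading of Equation (3), each module's contribution is its mixing proportion times a responsibility-weighted version of the objective in Equation (1). A hedged sketch of that computation for one module, with our own names:

```python
import numpy as np

def weighted_module_objective(A, B, resp):
    """pi_m * I_m for one module, with covariances weighted by the module's
    stored responsibilities resp (T,). A sketch of our reading of Eq. (3)."""
    w = resp / resp.sum()
    def wcov(X):
        Xc = X - w @ X                       # weighted mean removed
        return (Xc * w[:, None]).T @ Xc      # responsibility-weighted covariance
    _, ls = np.linalg.slogdet(wcov(A + B))
    _, ld = np.linalg.slogdet(wcov(A - B))
    return resp.mean() * 0.5 * (ls - ld)
```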
In our experiments, we are using just two modules and an ensemble of images of two
different shapes (either a g or a y in each image). We have found that the system can
cluster the images with a little bootstrapping. We initially split the training set into
g-images and y-images, and train up one module for several iterations on one set of
images, and the other module on the other set. When we then use a new training set
containing 500 images of each shape, and train both modules competitively on the
full set, the system successfully learns to separate the images so that the modules
each specialize in a particular shape. After the bootstrapping, one module wins on
297 cases of one shape and 206 cases of the other shape. After further learning on
the unlabelled mixture of shapes, it wins on 498 cases of one shape and 0 cases of
the other.
By making another assumption, that the input images in the training set are temporally coherent, we should be able to eliminate the need for the bootstrapping
procedure. If we assume that the training images come in runs of one class, and
then another, as would be the case if they were a sequence of images of various
moving objects, then for each module, we can attempt to maximize the mutual
information between the responsibilities it assigns to consecutive training images.
We can augment the objective function I* by adding this temporal coherence term
onto the spatial coherence term, and our network should cluster the input set into
different shapes while simultaneously learning how to recognize them.
Finally, we plan to extend our model to become a more general recognition system.
Since the learning is relatively fast, we should also be able to build a hierarchy of
modules that could learn to recognize more complex objects.
Acknowledgements
We thank Sue Becker and Steve Nowlan for helpful discussions. This research was supported by grants from the Ontario Information Technology Research Center, the Natural
Sciences and Engineering Research Council, and Apple Computer, Inc. Hinton is the
Noranda Fellow of the Canadian Institute for Advanced Research.
References
Ballard, D. H. (1981). Generalizing the Hough transform to detect arbitrary shapes.
Pattern Recognition, 13(2):111-122.
Becker, S. and Hinton, G. E. (1989). Spatial coherence as an internal teacher for a
neural network. Technical Report Technical Report CRG-TR-89-7, University
of Toronto.
Hinton, G. E. (1981). A parallel computation that assigns canonical object-based
frames of reference. In Proceedings of the 7th International Joint Conference
on Artificial Intelligence, pages 683- 685, Vancouver, BC, Canada.
Hinton, G. E. and Becker, S. (1990). An unsupervised learning procedure that discovers surfaces in random-dot stereograms. In Proceedings of the International
Joint Conference on Neural Networks, volume 1, pages 218-222, Hillsdale, NJ.
Erlbaum.
Lowe, D. G. (1985). Perceptual Organization and Visual Recognition. Kluwer Academic Publishers, Boston.
Lowe, D. G. (1987). The viewpoint consistency constraint. International Journal
of Computer Vision, 1:57-72.
Roberts, L. G. (1965). Machine perception of three-dimensional solids. In Tippett,
J. T., editor, Optical and Electro-Optical Information Processing. MIT Press.
Zemel, R. S., Mozer, M. C., and Hinton, G. E. (1989). TRAFFIC: Object recognition using hierarchical reference frame transformations. In Touretzky, D. S.,
editor, Advances in Neural Information Processing Systems 2, pages 266-273.
Morgan Kaufmann, San Mateo, CA.
2,525 | 3,290 | Temporal Difference Updating
without a Learning Rate
Marcus Hutter
RSISE@ANU and SML@NICTA
Canberra, ACT, 0200, Australia
marcus@hutter1.net www.hutter1.net
Shane Legg
IDSIA, Galleria 2, Manno-Lugano CH-6928, Switzerland
shane@vetta.org www.vetta.org/shane
Abstract
We derive an equation for temporal difference learning from statistical principles.
Specifically, we start with the variational principle and then bootstrap to produce
an updating rule for discounted state value estimates. The resulting equation is
similar to the standard equation for temporal difference learning with eligibility traces, so called TD(λ), however it lacks the parameter α that specifies the
learning rate. In the place of this free parameter there is now an equation for the
learning rate that is specific to each state transition. We experimentally test this
new learning rule against TD(?) and find that it offers superior performance in
various settings. Finally, we make some preliminary investigations into how to
extend our new temporal difference algorithm to reinforcement learning. To do
this we combine our update equation with both Watkins' Q(λ) and Sarsa(λ) and
find that it again offers superior performance without a learning rate parameter.
1 Introduction
In the field of reinforcement learning, perhaps the most popular way to estimate the future discounted
reward of states is the method of temporal difference learning. It is unclear who exactly introduced
this first, however the first explicit version of temporal difference as a learning rule appears to be
Witten [9]. The idea is as follows: The expected future discounted reward of a state s is,
V s := E rk + ?rk+1 + ? 2 rk+2 + ? ? ? |sk = s ,
where the rewards rk , rk+1 , . . . are geometrically discounted into the future by ? < 1. From this
definition it follows that,
V s = E rk + ?V sk+1 |sk = s .
(1)
Our task, at time t, is to compute an estimate Vst of V s for each state s. The only information we
have to base this estimate on is the current history of state transitions, s1 , s2 , . . . , st , and the current
history of observed rewards, r1 , r2 , . . . , rt . Equation (1) suggests that at time t + 1 the value of
rt + ?Vst+1 provides us with information on what Vst should be: If it is higher than Vstt then perhaps
this estimate should be increased, and vice versa. This intuition gives us the following estimation
heuristic for state st ,
Vst+1
:= Vstt + ? rt + ?Vstt+1 ? Vstt ,
t
where ? is a parameter that controls the rate of learning. This type of temporal difference learning
is known as TD(0).
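As a concrete illustration, the TD(0) update can be written in a couple of lines of Python. This is a generic sketch, not code from the paper; the array V, the state indices and the fixed learning rate alpha are our own naming choices:

import numpy as np

def td0_update(V, s, r, s_next, alpha, gamma):
    # V is an array of value estimates, one entry per state. Move V[s]
    # toward the one-step bootstrapped target r + gamma * V[s_next].
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V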
One shortcoming of this method is that at each time step the value of only the last state st is updated.
States before the last state are also affected by changes in the last state?s value and thus these could
be updated too. This is what happens with so called temporal difference learning with eligibility
traces, where a history, or trace, is kept of which states have been recently visited. Under this
method, when we update the value of a state we also go back through the trace updating the earlier
states as well. Formally, for any state s its eligibility trace is computed by,
E^t_s := λγE^{t−1}_s if s ≠ s_t,
E^t_s := λγE^{t−1}_s + 1 if s = s_t,
where λ is used to control the rate at which the eligibility trace is discounted. The temporal difference update is then, for all states s,
V^{t+1}_s := V^t_s + αE^t_s ( r_t + γV^t_{s_{t+1}} − V^t_{s_t} ).    (2)
This more powerful version of temporal difference learning is known as TD(λ) [7].
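The trace-based update in Equation (2) touches every state at each time step. A minimal sketch of one TD(λ) step, under the same hypothetical naming as above (V and E are numpy arrays over states):

def td_lambda_step(V, E, s, r, s_next, alpha, gamma, lam):
    # Decay all eligibility traces, then bump the trace of the current state.
    E *= lam * gamma
    E[s] += 1.0
    # One temporal difference error, applied to every state in proportion
    # to its eligibility, as in Equation (2).
    delta = r + gamma * V[s_next] - V[s]
    V += alpha * E * delta
    return V, E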
The main idea of this paper is to derive a temporal difference rule from statistical principles and
compare it to the standard heuristic described above. Superficially, our work has some similarities
to LSTD(λ) ([2] and references therein). However LSTD is concerned with finding a least-squares
linear function approximation, it has not yet been developed for general λ and γ, and has update time
quadratic in the number of features/states. On the other hand, our algorithm "exactly" coincides
with TD/Q/Sarsa(λ) for finite state spaces, but with a novel learning rate derived from statistical
principles. We therefore focus our comparison on TD/Q/Sarsa. For a recent survey of methods to
set the learning rate see [1].
In Section 2 we derive a least squares estimate for the value function. By expressing the estimate as
an incremental update rule we obtain a new form of TD(λ), which we call HL(λ). In Section 3 we
compare HL(λ) to TD(λ) on a simple Markov chain. We then test it on a random Markov chain in
Section 4 and a non-stationary environment in Section 5. In Section 6 we derive two new methods
for policy learning based on HL(λ), and compare them to Sarsa(λ) and Watkins' Q(λ) on a simple
reinforcement learning problem. Section 7 ends the paper with a summary and some thoughts on
future research directions.
2 Derivation
The empirical future discounted reward of a state sk is the sum of actual rewards following from
state sk in time steps k, k + 1, . . ., where the rewards are discounted as they go into the future.
Formally, the empirical value of state sk at time k for k = 1, ..., t is,
v_k := Σ_{u=k}^{∞} γ^{u−k} r_u,    (3)
where the future rewards r_u are geometrically discounted by γ < 1. In practice the exact value of
v_k is always unknown to us as it depends not only on rewards that have already been observed, but
also on unknown future rewards. Note that if s_m = s_n for m ≠ n, that is, we have visited the same
state twice at different times m and n, this does not imply that v_n = v_m as the observed rewards
following the state visit may be different each time.
Our goal is that for each state s the estimate V^t_s should be as close as possible to the true expected
future discounted reward V_s. Thus, for each state s we would like V_s to be close to v_k for all k such
that s = s_k. Furthermore, in non-stationary environments we would like to discount old evidence
by some parameter λ ∈ (0, 1]. Formally, we want to minimise the loss function,
L := (1/2) Σ_{k=1}^{t} λ^{t−k} ( v_k − V^t_{s_k} )².    (4)
For stationary environments we may simply set λ = 1 a priori.
As we wish to minimise this loss, we take the partial derivative with respect to the value estimate of
each state and set it to zero,
∂L/∂V^t_s = −Σ_{k=1}^{t} λ^{t−k} ( v_k − V^t_{s_k} ) δ_{s_k s} = V^t_s Σ_{k=1}^{t} λ^{t−k} δ_{s_k s} − Σ_{k=1}^{t} λ^{t−k} δ_{s_k s} v_k = 0,
where we could change V^t_{s_k} into V^t_s due to the presence of the Kronecker δ_{s_k s}, defined δ_{xy} := 1 if
x = y, and 0 otherwise. By defining a discounted state visit counter N^t_s := Σ_{k=1}^{t} λ^{t−k} δ_{s_k s} we get
V^t_s N^t_s = Σ_{k=1}^{t} λ^{t−k} δ_{s_k s} v_k.    (5)
Since v_k depends on future rewards r_k, Equation (5) can not be used in its current form. Next we
note that v_k has a self-consistency property with respect to the rewards. Specifically, the tail of the
future discounted reward sum for each state depends on the empirical value at time t in the following
way,
v_k = Σ_{u=k}^{t−1} γ^{u−k} r_u + γ^{t−k} v_t.
Substituting this into Equation (5) and exchanging the order of the double sum,
V^t_s N^t_s = Σ_{u=1}^{t−1} Σ_{k=1}^{u} λ^{t−k} δ_{s_k s} γ^{u−k} r_u + Σ_{k=1}^{t} λ^{t−k} δ_{s_k s} γ^{t−k} v_t
           = Σ_{u=1}^{t−1} λ^{t−u} Σ_{k=1}^{u} (λγ)^{u−k} δ_{s_k s} r_u + Σ_{k=1}^{t} (λγ)^{t−k} δ_{s_k s} v_t
           = R^t_s + E^t_s v_t,
where E^t_s := Σ_{k=1}^{t} (λγ)^{t−k} δ_{s_k s} is the eligibility trace of state s, and R^t_s := Σ_{u=1}^{t−1} λ^{t−u} E^u_s r_u is
the discounted reward with eligibility.
E^t_s and R^t_s depend only on quantities known at time t. The only unknown quantity is v_t, which we
have to replace with our current estimate of this value at time t, which is V^t_{s_t}. In other words, we
bootstrap our estimates. This gives us,
V^t_s N^t_s = R^t_s + E^t_s V^t_{s_t}.    (6)
For state s = s_t, this simplifies to V^t_{s_t} = R^t_{s_t} / ( N^t_{s_t} − E^t_{s_t} ). Substituting this back into Equation (6)
we obtain,
V^t_s N^t_s = R^t_s + E^t_s R^t_{s_t} / ( N^t_{s_t} − E^t_{s_t} ).    (7)
This gives us an explicit expression for our V estimates. However, from an algorithmic perspective
an incremental update rule is more convenient. To derive this we make use of the relations,
N^{t+1}_s = λN^t_s + δ_{s_{t+1} s},    E^{t+1}_s = λγE^t_s + δ_{s_{t+1} s},    R^{t+1}_s = λR^t_s + λE^t_s r_t,
with N^0_s = E^0_s = R^0_s = 0.
Inserting these into Equation (7) with t replaced by t + 1,
V^{t+1}_s N^{t+1}_s = R^{t+1}_s + E^{t+1}_s R^{t+1}_{s_{t+1}} / ( N^{t+1}_{s_{t+1}} − E^{t+1}_{s_{t+1}} )
                  = λR^t_s + λE^t_s r_t + E^{t+1}_s ( λR^t_{s_{t+1}} + λE^t_{s_{t+1}} r_t ) / ( λ( N^t_{s_{t+1}} − γE^t_{s_{t+1}} ) ).
By solving Equation (6) for R^t_s and substituting back in, expanding the numerators, dividing
through by N^{t+1}_s = λN^t_s + δ_{s_{t+1} s}, and cancelling equal terms (keeping in mind that in every term
with a Kronecker δ_{xy} factor we may assume that x = y, as the term is always zero otherwise), the
common factor E^t_s can be taken out and we obtain our update rule,
V^{t+1}_s = V^t_s + E^t_s β_t(s, s_{t+1}) ( r_t + γV^t_{s_{t+1}} − V^t_{s_t} ),    (8)
where the learning rate is given by,
β_t(s, s_{t+1}) := [ N^t_{s_{t+1}} / ( N^t_{s_{t+1}} − γE^t_{s_{t+1}} ) ] · [ 1 / N^t_s ].    (9)
Examining Equation (8), we find the usual update equation for temporal difference learning with eligibility traces (see Equation (2)), however the learning rate α has now been replaced by β_t(s, s_{t+1}).
This learning rate was derived from statistical principles by minimising the squared loss between
the estimated and true state value. In the derivation we have exploited the fact that the latter must be
self-consistent and then bootstrapped to get Equation (6). This gives us an equation for the learning
rate for each state transition at time t, as opposed to the standard temporal difference learning where
the learning rate α is either a fixed free parameter for all transitions, or is decreased over time by
some monotonically decreasing function. In either case, the learning rate is not automatic and must
be experimentally tuned for good performance. The above derivation appears to theoretically solve
this problem.
The first term in β_t seems to provide some type of normalisation to the learning rate, though the
intuition behind this is not clear to us. The meaning of the second term however can be understood as
follows: N^t_s measures how often we have visited state s in the recent past. Therefore, if N^t_s ≪ N^t_{s_{t+1}}
then state s has a value estimate based on relatively few samples, while state s_{t+1} has a
value estimate based on relatively many samples. In such a situation, the second term in β_t boosts
the learning rate so that V^{t+1}_s moves more aggressively towards the presumably more accurate
r_t + γV^t_{s_{t+1}}. In the opposite situation when s_{t+1} is a less visited state, we see that the reverse occurs
and the learning rate is reduced in order to maintain the existing value of V_s.
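To make the new rule concrete, the following sketch implements Equations (8) and (9) directly for a finite state space. It reflects our reading of the derivation; the array names and the initialisation of N to ones (borrowed from Algorithm 1 later in the paper) are our own choices:

import numpy as np

def hl_lambda_step(V, E, N, s, s_next, r, gamma, lam):
    # One HL(lambda) step for the transition s -> s_next with reward r.
    # V, E, N are arrays indexed by state; initialise N to ones and E to
    # zeros so the learning rate below is always well defined.
    delta = r + gamma * V[s_next] - V[s]
    # Per-transition learning rate, Equation (9): the first factor depends
    # only on the next state, the second factor 1/N is computed per state.
    beta = N[s_next] / ((N[s_next] - gamma * E[s_next]) * N)
    # Equation (8): every state moves in proportion to its eligibility.
    V += beta * E * delta
    # Discount the counters and record the visit to the new state.
    E *= lam * gamma
    N *= lam
    E[s_next] += 1.0
    N[s_next] += 1.0
    return V, E, N

Initialising N to ones rather than zeros keeps the denominators positive before a state has been visited.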
3 A simple Markov process
For our first test we consider a simple Markov process with 51 states. In each step the state number
is either incremented or decremented by one with equal probability, unless the system is in state 0
or 50 in which case it always transitions to state 25 in the following step. When the state transitions
from 0 to 25 a reward of 1.0 is generated, and for a transition from 50 to 25 a reward of -1.0 is
generated. All other transitions have a reward of 0. We set the discount value γ = 0.99 and then
computed the true discounted value of each state by running a brute force Monte Carlo simulation.
We ran our algorithm 10 times on the above Markov chain and computed the root mean squared
error in the value estimate across the states at each time step averaged across each run. The optimal
value of λ for HL(λ) was 1.0, which was to be expected given that the environment is stationary and
thus discounting old experience is not helpful.
[Figure 1: RMSE vs. time on the 51 state Markov process, averaged over 10 runs; curves: HL(1.0), TD(0.9) a = 0.1, TD(0.9) a = 0.2. The parameter a is the learning rate α.]
[Figure 2: RMSE vs. time on the 51 state Markov process, averaged over 300 runs; curves: HL(1.0), TD(0.9) a = 8.0/sqrt(t), TD(0.9) a = 2.0/cbrt(t).]
For TD(λ) we tried various different learning rates and values of λ. We could find no settings where
TD(λ) was competitive with HL(λ). If the learning rate α was set too high the system would learn
as fast as HL(λ) briefly before becoming stuck. With a lower learning rate the final performance
was improved, however the initial performance was now much worse than HL(λ). The results of
these tests appear in Figure 1.
Similar tests were performed with larger and smaller Markov chains, and with different values of γ.
HL(λ) was consistently superior to TD(λ) across these tests. One wonders whether this may be due
to the fact that the implicit learning rate that HL(λ) uses is not fixed. To test this we explored the
performance of a number of different learning rate functions on the 51 state Markov chain described
above. We found that functions of the form α/t always performed poorly, however good performance
was possible by setting α correctly for functions of the form α/√t and α/∛t. As the results were much
closer, we averaged over 300 runs. These results appear in Figure 2.
With a variable learning rate TD(λ) is performing much better, however we were still unable to find
an equation that reduced the learning rate in such a way that TD(λ) would outperform HL(λ). This
is evidence that HL(λ) is adapting the learning rate optimally without the need for manual equation
tuning.
4 Random Markov process
To test on a Markov process with a more complex transition structure, we created a random 50
state Markov process. We did this by creating a 50 by 50 transition matrix where each element was
set to 0 with probability 0.9, and a uniformly random number in the interval [0, 1] otherwise. We
then scaled each row to sum to 1. Then to transition between states we interpreted the ith row as a
probability distribution over which state follows state i. To compute the reward associated with each
transition we created a random matrix as above, but without normalising. We set γ = 0.9 and then
ran a brute force Monte Carlo simulation to compute the true discounted value of each state.
The λ parameter for HL(λ) was simply set to 1.0 as the environment is stationary. For TD we
experimented with a range of parameter settings and learning rate decrease functions. We found that
a fixed learning rate of α = 0.2, and a decreasing rate of 1.5/∛t, performed reasonably well, but never
as well as HL(λ). The results were generated by averaging over 10 runs, and are shown in Figure 3.
Although the structure of this Markov process is quite different to that used in the previous experiment, the results are again similar: HL(λ) performs as well or better than TD(λ) from the beginning
to the end of the run. Furthermore, stability in the error towards the end of the run is better with
HL(λ) and no manual learning rate tuning was required for these performance gains.
[Figure 3: RMSE vs. time on the random 50 state Markov process; curves: HL(1.0), TD(0.9) a = 0.2, TD(0.9) a = 1.5/cbrt(t). The parameter a is the learning rate α.]
[Figure 4: RMSE vs. time on the 21 state non-stationary Markov process; curves: HL(0.9995), TD(0.8) a = 0.05, TD(0.9) a = 0.05.]
5 Non-stationary Markov process
The λ parameter in HL(λ), introduced in Equation (4), reduces the importance of old observations
when computing the state value estimates. When the environment is stationary this is not useful and
so we can set λ = 1.0, however in a non-stationary environment we need to reduce this value so that
the state values adapt properly to changes in the environment. The more rapidly the environment is
changing, the lower we need to make λ in order to more rapidly forget old observations.
To test HL(λ) in such a setting, we used the Markov chain from Section 3, but reduced its size to
21 states to speed up convergence. We used this Markov chain for the first 5,000 time steps. At that
point, we changed the reward when transitioning from the last state to the middle state from -1.0 to
0.5. At time 10,000 we then switched back to the original Markov chain, and so on alternating
between the models of the environment every 5,000 steps. At each switch, we also changed the
target state values that the algorithm was trying to estimate to match the current configuration of the
environment. For this experiment we set γ = 0.9.
As expected, the optimal value of λ for HL(λ) fell from 1 down to about 0.9995. This is about what
we would expect given that each phase is 5,000 steps long. For TD(λ) the optimal value of λ was
around 0.8 and the optimum learning rate was around 0.05. As we would expect, for both algorithms
when we pushed λ above its optimal value this caused poor performance in the periods following
each switch in the environment (these bad parameter settings are not shown in the results). On the
other hand, setting λ too low produced initially fast adaptation to each environment switch, but poor
performance after that until the next environment change. To get accurate statistics we averaged
over 200 runs. The results of these tests appear in Figure 4.
For some reason HL(0.9995) learns faster than TD(0.8) in the first half of the first cycle, but only
equally fast at the start of each following cycle. We are not sure why this is happening. We could
improve the initial speed at which HL(λ) learnt in the last three cycles by reducing λ, however that
comes at a performance cost in terms of the lowest mean squared error attained at the end of each
cycle. In any case, in this non-stationary situation HL(λ) again performed well.
6 Windy Gridworld
Reinforcement learning algorithms such as Watkins' Q(λ) [8] and Sarsa(λ) [5, 4] are based on
temporal difference updates. This suggests that new reinforcement learning algorithms based on
HL(λ) should be possible.
For our first experiment we took the standard Sarsa(λ) algorithm and modified it in the obvious way
to use an HL temporal difference update. In the presentation of this algorithm we have changed
notation slightly to make things more consistent with that typical in reinforcement learning. Specifically, we have dropped the t superscript as this is implicit in the algorithm specification, and have
Algorithm 1 HLS(λ)
Initialise Q(s, a) = 0, N(s, a) = 1 and E(s, a) = 0 for all s, a
Initialise s and a
repeat
    Take action a, observe r, s′
    Choose a′ by using ε-greedy selection on Q(s′, ·)
    δ ← r + γQ(s′, a′) − Q(s, a)
    E(s, a) ← E(s, a) + 1
    N(s, a) ← N(s, a) + 1
    for all s, a do
        β((s, a), (s′, a′)) ← N(s′, a′) / [ ( N(s′, a′) − γE(s′, a′) ) N(s, a) ]
    end for
    for all s, a do
        Q(s, a) ← Q(s, a) + β((s, a), (s′, a′)) E(s, a) δ
        E(s, a) ← λγE(s, a)
        N(s, a) ← λN(s, a)
    end for
    s ← s′; a ← a′
until end of run
defined Q(s, a) := V_{(s,a)}, E(s, a) := E_{(s,a)} and N(s, a) := N_{(s,a)}. Our new reinforcement learning algorithm, which we call HLS(λ), is given in Algorithm 1. Essentially the only changes to the
standard Sarsa(λ) algorithm have been to add code to compute the visit counter N(s, a), add a loop
to compute the β values, and replace α with β in the temporal difference update.
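A runnable Python version of one HLS(λ) step, mirroring Algorithm 1, might look as follows; the function signature, the rng argument, and the array shapes are our own choices for the sketch:

import numpy as np

def hls_step(Q, E, N, s, a, r, s_next, gamma, lam, eps, rng):
    # Q, E, N are (num_states, num_actions) arrays; N starts at ones and
    # E at zeros, as in Algorithm 1.
    # Epsilon-greedy selection of the next action.
    if rng.random() < eps:
        a_next = int(rng.integers(Q.shape[1]))
    else:
        a_next = int(np.argmax(Q[s_next]))
    delta = r + gamma * Q[s_next, a_next] - Q[s, a]
    E[s, a] += 1.0
    N[s, a] += 1.0
    # Learning rate of Equation (9), lifted to state-action pairs.
    beta = N[s_next, a_next] / ((N[s_next, a_next]
                                 - gamma * E[s_next, a_next]) * N)
    Q += beta * E * delta
    E *= lam * gamma
    N *= lam
    return a_next

Here rng is a numpy Generator, e.g. np.random.default_rng().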
To test HLS(λ) against standard Sarsa(λ) we used the Windy Gridworld environment described on
page 146 of [6]. This world is a grid of 7 by 10 squares that the agent can move through by going
either up, down, left or right. If the agent attempts to move off the grid it simply stays where it is.
The agent starts in the 4th row of the 1st column and receives a reward of 1 when it finds its way to
the 4th row of the 8th column. To make things more difficult, there is a "wind" blowing the agent
up 1 row in columns 4, 5, 6, and 9, and a strong wind of 2 in columns 7 and 8. This is illustrated in
Figure 5. Unlike in the original version, we have set up this problem to be a continuing discounted
task with an automatic transition from the goal state back to the start state.
We set γ = 0.99 and in each run computed the empirical future discounted reward at each point in
time. As this value oscillated we also ran a moving average through these values with a window
length of 50. Each run lasted for 50,000 time steps as this allowed us to see at what level each
learning algorithm topped out. These results appear in Figure 6 and were averaged over 500 runs to
get accurate statistics.
Despite putting considerable effort into tuning the parameters of Sarsa(λ), we were unable to achieve
a final future discounted reward above 5.0. The settings shown on the graph represent the best final
value we could achieve. In comparison HLS(λ) easily beat this result at the end of the run, while
being slightly slower than Sarsa(λ) at the start. By setting λ = 0.99 we were able to achieve the
same performance as Sarsa(λ) at the start of the run, however the performance at the end of the run
was then only slightly better than Sarsa(λ). This combination of superior performance and fewer
parameters to tune suggests that the benefits of HL(λ) carry over into the reinforcement learning
setting.
Another popular reinforcement learning algorithm is Watkins' Q(λ). Similar to Sarsa(λ) above, we
simply inserted the HL(λ) temporal difference update into the usual Q(λ) algorithm in the obvious
way. We call this new algorithm HLQ(λ) (not shown). The test environment was exactly the same as
we used with Sarsa(λ) above.
The results this time were more competitive (these results are not shown). Nevertheless, despite
spending a considerable amount of time fine tuning the parameters of Q(λ), we were unable to beat
HLQ(λ). As the performance advantage was relatively modest, the main benefit of HLQ(λ) was
that it achieved this level of performance without having to tune a learning rate.
[Figure 5: The Windy Gridworld. S marks the start state and G the goal state, at which the agent jumps back to S with a reward of 1.]
[Figure 6: Future discounted reward vs. time for Sarsa(λ) vs. HLS(λ) in the Windy Gridworld, averaged over 500 runs; curves: HLS(0.995) e = 0.003, Sarsa(0.5) a = 0.4 e = 0.005. On the graph, e represents the exploration parameter ε, and a the learning rate α.]
7 Conclusions
We have derived a new equation for setting the learning rate in temporal difference learning with
eligibility traces. The equation replaces the free learning rate parameter α, which is normally experimentally tuned by hand. In every setting tested, be it stationary Markov chains, non-stationary
Markov chains or reinforcement learning, our new method produced superior results.
To further our theoretical understanding, the next step would be to try to prove that the method
converges to correct estimates. This can be done for TD(λ) under certain assumptions on how the
learning rate decreases over time. Hopefully, something similar can be proven for our new method.
In terms of experimental results, it would be interesting to try different types of reinforcement learning problems and to more clearly identify where the ability to set the learning rate differently for
different state transition pairs helps performance. It would also be good to generalise the result to
episodic tasks. Finally, just as we have successfully merged HL(λ) with Sarsa(λ) and Watkins'
Q(λ), we would also like to see if the same can be done with Peng's Q(λ) [3], and perhaps other
reinforcement learning algorithms.
Acknowledgements
This research was funded by the Swiss NSF grant 200020-107616.
References
[1] A. P. George and W. B. Powell. Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming. Journal of Machine Learning, 65(1):167–198, 2006.
[2] M. G. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107–1149, 2003.
[3] J. Peng and R. J. Williams. Incremental multi-step Q-learning. Machine Learning, 22:283–290, 1996.
[4] G. A. Rummery. Problem solving with reinforcement learning. PhD thesis, Cambridge University, 1995.
[5] G. A. Rummery and M. Niranjan. On-line Q-learning using connectionist systems. Technical Report CUED/F-INFENG/TR 166, Engineering Department, Cambridge University, 1994.
[6] R. Sutton and A. Barto. Reinforcement Learning: An Introduction. Cambridge, MA, MIT Press, 1998.
[7] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9–44, 1988.
[8] C. J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, King's College, Oxford, 1989.
[9] I. H. Witten. An adaptive optimal controller for discrete-time Markov environments. Information and Control, 34:286–295, 1977.
2,526 | 3,291 | What Makes Some POMDP Problems Easy to Approximate?
David Hsu†    Wee Sun Lee†    Nan Rong‡
† Department of Computer Science, National University of Singapore, Singapore, 117590, Singapore
‡ Department of Computer Science, Cornell University, Ithaca, NY 14853, USA
Abstract
Point-based algorithms have been surprisingly successful in computing approximately optimal solutions for partially observable Markov decision processes
(POMDPs) in high dimensional belief spaces. In this work, we seek to understand
the belief-space properties that allow some POMDP problems to be approximated
efficiently and thus help to explain the point-based algorithms? success often observed in the experiments. We show that an approximately optimal POMDP solution can be computed in time polynomial in the covering number of a reachable
belief space, which is the subset of the belief space reachable from a given belief
point. We also show that under the weaker condition of having a small covering
number for an optimal reachable space, which is the subset of the belief space
reachable under an optimal policy, computing an approximately optimal solution
is NP-hard. However, given a suitable set of points that ?cover? an optimal reachable space well, an approximate solution can be computed in polynomial time.
The covering number highlights several interesting properties that reduce the complexity of POMDP planning in practice, e.g., fully observed state variables, beliefs
with sparse support, smooth beliefs, and circulant state-transition matrices.
1 Introduction
Computing an optimal policy for a partially observable Markov decision process (POMDP) is an
intractable problem [10, 9]. Intuitively, the intractability is due to the "curse of dimensionality":
the belief space B used in solving a POMDP typically has dimensionality equal to |S|, the number
of states in the POMDP, and therefore the size of B grows exponentially with |S|. As a result, the
number of states is often used in practice as an important measure of the complexity of POMDP
planning. However, in recent years, point-based POMDP algorithms have made impressive progress
in computing approximate solutions by sampling the belief space: POMDPs with hundreds of states
have been solved in a matter of seconds [14, 4]. It seems surprising that even an approximate
solution can be obtained in seconds in a space of hundreds of dimensions. Thus, we would like to
investigate why these point-based algorithms work well, whether there are sub-classes of POMDPs
that are computationally easier, and whether there are alternative measures that better capture the
complexity of POMDP planning for point-based algorithms.
Our work is motivated by a benchmark problem called Tag [11], in which a robot needs to search
and tag a moving target that tends to move away from it. The environment is modeled as a grid.
The robot's position is fully observable. The target's position is not observable, i.e., unknown to
the robot, unless the target is in the same grid position as the robot. The joint state of the robot
and target positions is thus only partially observable. The problem has 870 states in total, resulting
in a belief space of 870 dimensions. Tag was introduced in the work on Point-Based Value Iteration (PBVI) [11], one of the first point-based POMDP algorithms. At the time, it was among the
largest POMDP problems ever attempted and was considered a challenge for fast, scalable POMDP
algorithms [11]. Surprisingly, only two years later, another point-based algorithm [14] computed an
approximate solution to Tag, a problem with an 870-dimensional belief space, in less than a minute!
One important feature that underlies the success of many point-based algorithms is that they only
explore a subset R(b0) ⊆ B, usually called the reachable space from b0. The reachable space R(b0)
contains all points reachable from a given initial belief point b0 ∈ B under arbitrary sequences of
actions and observations. One may then speculate that the reason for point-based algorithms' good
performance on Tag is that its reachable space R(b0 ) has much lower dimensionality than B. This
is, however, not true. By checking the dimensionality of a large set of points sampled from R(b0 ),
we have found that the dimensionality of R(b0 ) is at least 860 and thus almost as large as B.
In this paper, we propose to use the covering number as an alternative measure of the complexity
of POMDP planning (Section 4). Intuitively, the covering number of a space is the minimum
number of balls of a given size needed to cover the space fully. We show that an approximately
optimal POMDP solution can be computed in time polynomial in the covering number of R(b0 ).
The covering number also reveals that the belief space for Tag behaves more like the union of
some 29-dimensional spaces rather than an 870-dimensional space, as the robot's position is fully
observed. Therefore, Tag is probably not as hard as it was thought to be, and the covering number
captures the complexity of the Tag problem better than the dimensionality of the belief space (the
number of states) or the dimensionality of the reachable space.
We further ask whether it is possible to compute an approximate solution efficiently under the weaker
condition of having a small covering number for an optimal reachable space R*(b0), which contains only
points in B reachable from b0 under an optimal policy. Unfortunately, we can show that this problem
is NP-hard. The problem remains NP-hard, even if the optimal policies have a compact piecewise-linear representation using α-vectors. However, we can also show that given a suitable set of points
that "cover" R*(b0) well, a good approximate solution can be computed in polynomial time. Together, the negative and the positive results indicate that using sampling to approximate an optimal
reachable space, and not just the reachable space, may be a promising approach in practice. We have
already obtained initial experimental evidence that supports this idea. Through careful sampling and
pruning, our new point-based algorithm solved the Tag problem in less than 5 seconds [4].
The covering number highlights several properties that reduce the complexity of POMDP planning
in practice, and it helps to quantify their effects (Section 5). Highly informative observations usually
result in beliefs with sparse support and substantially reduce the covering number. For example, fully
observed state variables reduce the covering number by a doubly exponential factor. Interestingly,
smooth beliefs, usually a result of imperfect actions and uninformative observations, also reduce
the covering number. In addition, state-transition matrices with special structures, such as circulant
matrices [1], restrict the space of reachable beliefs and reduce the covering number correspondingly.
2 Related Works
POMDPs provide a principled mathematical framework for planning and decision-making under
uncertainty [13, 5], but they are notoriously hard to solve [10, 7, 9, 8]. It has been shown that finding
an optimal policy over the entire belief space for a finite-horizon POMDP is PSPACE-complete [10]
and that finding an optimal policy over an infinite horizon is undecidable [9].
As a result, there has been a lot of work on computing approximate POMDP solutions [2], including
a number of point-based POMDP algorithms [16, 11, 15, 14, 3]. Some point-based algorithms were
able to compute reasonably good policies for very large POMDPs with hundreds of thousands states.
The success of these algorithms motivated us to try to understand why and when they work well.
The approximation errors of some point-based algorithms have been analyzed [11, 14], but these
analyses do not address the general question of when an approximately optimal policy can be computed efficiently in polynomial time. We provide both positive and negative results showing the
difficulty of computing approximate POMDP solutions. The proof techniques used for Theorems 1
and 2 are similar to those used for analyzing an approximation algorithm for large (fully observable)
MDPs [6]. While the algorithm in [6] handles large state spaces well, it does not run in polynomial
time: it appears that additional assumptions such as those made in this paper are required for polynomial time results. Our hardness result is closely related to that for finite-horizon POMDPs [8], but
we give a direct reduction from the Hamiltonian cycle problem.
3 Preliminaries
A POMDP models an agent taking a sequence of actions under uncertainty to maximize its total
reward. Formally it is specified as a tuple (S, A, O, T, Z, R, γ), where S is a set of discrete states,
A is a finite set of actions, and O is a set of discrete observations. At each time step, the agent
takes some action a ∈ A and moves from a start state s to an end state s′. The end state s′ is
given by a state-transition function T(s, a, s′) = p(s′|s, a), which gives the probability that the
agent lies in s′, after taking action a in state s. The agent then makes an observation to gather
information on its current state. The outcome of observing o ∈ O is given by an observation
function Z(s, a, o) = p(o|s, a) for s ∈ S and a ∈ A. The reward function R gives the agent a
real-valued reward R(s, a) if it takes action a in state s, and the goal of the agent is to maximize
its expected total reward by choosing a suitable sequence of actions. In this paper, we consider
only infinite-horizon POMDPs with discounted reward. Thus, the expected total reward is given by
E[ Σ_{t=0}^{∞} γ^t R(s_t, a_t) ], where γ ∈ (0, 1) is a discount factor, and s_t and a_t denote the agent's state
and the action at time t.
and the action at time t.
Since the agent?s state is only partially observable, we rely on the concept of a belief, which is
simply a probability distribution over S, represented discretely as a vector.
A POMDP solution is a policy π that specifies the action π(b) for every belief b. Our goal is to find
an optimal policy π* that maximizes the expected total reward. A policy π induces a value function
V^π that specifies the value V^π(b) of every belief b under π. It is known that V*, the value function
associated with the optimal policy π*, can be approximated arbitrarily closely by a convex, piecewise-linear function V(b) = max_{α∈Γ} (α · b), where Γ is a finite set of vectors called α-vectors.
The optimal value function V* satisfies the following Lipschitz condition:
Lemma 1 For any two belief points b and b′, if ||b − b′|| ≤ δ, then |V*(b) − V*(b′)| ≤ ( R_max / (1−γ) ) δ.¹
Throughout this paper, we always use the l1 metric to measure the distance between belief points: for
b, b′ ∈ R^d, ||b − b′|| = Σ_{i=1}^{d} |b_i − b′_i|. The Lipschitz condition bounds the change of a value
function using the distance between belief points. It provides the basis for approximating the value
at a belief point by the values of other belief points nearby.
To find an approximately optimal policy, point-based algorithms explore only the reachable belief
space R(b0) from a given initial belief point b0. Strictly speaking, these algorithms compute only a
policy over R(b0), rather than the entire belief space B. We can view the exploration of R(b0) as
searching a belief tree T_R rooted at b0 (Figure 1). The nodes of T_R correspond to beliefs in R(b0).
The edges correspond to action-observation pairs. Suppose that a child node b′ is connected to its
parent b by an edge (a, o). We can compute b′ using the formula
b′(s′) = τ(b, a, o) = ηZ(s′, a, o) Σ_s T(s, a, s′) b(s),
where η is a normalizing constant.
[Figure 1: The belief tree rooted at b0, with action branches (a1, a2) and observation branches (o1, o2).]
After obtaining enough belief points from R(b0), point-based algorithms perform backup operations
over them to compute an approximately optimal value function.
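For concreteness, the belief update τ(b, a, o) above is a one-line Bayes filter; this sketch assumes the model is given as dense numpy arrays, with names of our choosing:

import numpy as np

def belief_update(b, a, o, T, Z):
    # T[a][s, s'] = p(s' | s, a); Z[a][s', o] = p(o | s', a).
    b_next = Z[a][:, o] * (T[a].T @ b)
    eta = b_next.sum()  # normalizing constant
    return b_next / eta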
4 The Covering Number and the Complexity of POMDP Planning
Our first goal is to show that if the covering number of a reachable space R(b0 ) is small, then an
approximately optimal policy in R(b0 ) can be computed efficiently. We start with the definition of
the covering number:
Definition 1 Given a metric space X, a δ-cover of a set B ⊆ X is a set of points C ⊆ X such that
for every point b ∈ B, there is a point c ∈ C with ||b − c|| < δ. If all the points in C also lie in
B, then we say that C is a proper cover of B. The δ-covering number of B, denoted by C(δ), is the
size of the smallest δ-cover of B.
Intuitively, the covering number is equal to the minimum number of balls of radius δ needed to cover
the set B. A closely related notion is that of the packing number:
Definition 2 Given a metric space X, a δ-packing of a set B ⊆ X is a set of points P ⊆ B such
that for any two points p1, p2 ∈ P, ||p1 − p2|| ≥ δ. The δ-packing number of a set B, denoted by
P(δ), is the size of the largest δ-packing of B.
¹ The proofs of this and other results are available as an appendix at http://motion.comp.nus.edu.sg/papers/nips07.pdf.
For any set B, the following relationship holds between packing and covering numbers.
Lemma 2 C(δ) ≤ P(δ) ≤ C(δ/2).
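One direction of Lemma 2 can be seen constructively: a maximal δ-packing is also a proper δ-cover, since any point not kept must lie within δ of a kept point. A sketch, assuming beliefs arrive as rows of a numpy array:

import numpy as np

def greedy_delta_packing(beliefs, delta):
    # Keep a point only if it is at l1 distance at least delta from every
    # point kept so far; the result is a delta-packing of the scanned set
    # and a proper delta-cover of it.
    kept = []
    for b in beliefs:
        if all(np.abs(b - c).sum() >= delta for c in kept):
            kept.append(b)
    return kept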
We are now ready to state our first main result. It shows that for any point b0 ∈ B, if the covering
number of R(b0) grows polynomially with the parameters of interest, then a good approximation of
the value at b0 can be computed in polynomial time.
Theorem 1 For any b0 ∈ B, let C(δ) be the δ-covering number of R(b0). Given any constant ε > 0,
an approximation V(b0) of V*(b0), with error |V*(b0) − V(b0)| ≤ ε, can be found in time
O( C( (1−γ)²ε / (4γR_max) )² log_γ ( (1−γ)ε / (2R_max) ) ).
Proof. To prove the result, we give an algorithm that computes the required approximation. It
performs a depth-first search on a depth-bounded belief tree and uses approximate memoization to
avoid unnecessarily computing the values of very similar beliefs. Intuitively, to achieve a polynomial
time algorithm, we bound the height of the tree by exploiting the discount factor and bound the width
of the tree by exploiting the covering number.
We perform the depth-first search recursively on a belief tree T_R that has root b0 and height h, while
maintaining a δ-packing of R(b0) at every level of T_R. Suppose that the search encounters a new
belief node b at level i of T_R. If b is within a distance δ of a point b′ in the current packing at level i,
we set V(b) = V(b′), abort the recursion at b, and backtrack. Otherwise, we recursively search the
children of b. When the search returns, we perform a backup operation to compute V(b) and add b
to the packing at level i. If b is a leaf node of T_R, we set V(b) = 0. We build a separate packing at
each level of T_R, as each level has a different approximation error.
We now calculate the values for h and δ required to achieve the given approximation bound at b0.
Let ε_i = |V*(b) − V(b)| denote the approximation error for a node b at level i of T_R, if the recursive
search continues in the children of b. By convention, the leaf nodes are at level 0. Similarly, let ε′_i
denote the error for b, if the search aborts at b and V(b) = V(b′) for some b′ in the packing at level
i. Hence,
ε′_i = |V*(b) − V(b′)|
    ≤ |V*(b) − V*(b′)| + |V*(b′) − V(b′)|
    ≤ ( R_max / (1−γ) ) δ + ε_i,
where the last inequality uses Lemma 1 and the definition of ε_i. Clearly, ε_0 ≤ R_max/(1−γ). To
calculate ε_i for a node b at level i, we establish a recurrence. The children of b, which are at level
i − 1, have error at most ε′_{i−1}. Since a backup operation is performed at b, we have ε_i ≤ γε′_{i−1} and
thus the recurrence ε_i ≤ γ( ε_{i−1} + ( R_max / (1−γ) ) δ ). Expanding the recurrence, we find that the
error ε_h at the root b0 is given by
|V*(b0) − V(b0)| ≤ ( γR_max (1 − γ^h) / (1−γ)² ) δ + γ^h R_max / (1−γ)
                ≤ ( γR_max / (1−γ)² ) δ + γ^h R_max / (1−γ).
By setting δ = (1−γ)²ε / (2γR_max) and h = log_γ ( (1−γ)ε / (2R_max) ), we can guarantee |V*(b0) − V(b0)| ≤ ε.
We now work out the running time of the algorithm. For each node b in the packings, the algorithm
expands it by calculating the beliefs and the corresponding values for all its children and performing
a backup operation at b to compute V(b). It takes O(|S|²) time to calculate the belief at a child
node. We then perform a nearest neighbor search in O(P(δ)|S|) time to check whether the child
node lies within a distance δ of any point in the packing at that level. Since b has |A||O| children,
the expansion operation takes O(|A||O||S|(|S| + P(δ))) time. The backup operation then computes
V(b) as an average of its children's values, weighted by the probabilities specified by the observation
function, and takes only O(|A||O|) time. Since there are h packings of size P(δ) each and by
Lemma 2, P(δ) ≤ C(δ/2), the total running time of our algorithm is given by
O( h C(δ/2) |A||O||S| ( |S| + C(δ/2) ) ).
We assume that |S|, |A|, and |O| are constant to focus on the dependency on the covering number,
and the above expression then becomes O(h C(δ/2)²). Substituting in the values for h and δ, we get
the final result. □
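The following sketch captures the structure of the search in the proof. The model interface (expected_reward, prob_obs) and the use of the earlier belief_update are hypothetical stand-ins, not an API from the paper:

import numpy as np

def estimate_value(b, level, packings, model, delta, gamma):
    # Depth-first search with approximate memoization: a belief within
    # delta (in l1) of one already evaluated at this level reuses the
    # stored value instead of being expanded.
    if level == 0:
        return 0.0
    for c, v in packings[level]:
        if np.abs(b - c).sum() < delta:
            return v
    best = -np.inf
    for a in range(model.num_actions):
        total = model.expected_reward(b, a)   # sum_s b(s) R(s, a)
        for o in range(model.num_obs):
            p = model.prob_obs(b, a, o)       # p(o | b, a)
            if p > 0.0:
                b_next = belief_update(b, a, o, model.T, model.Z)
                total += gamma * p * estimate_value(
                    b_next, level - 1, packings, model, delta, gamma)
        best = max(best, total)
    packings[level].append((b, best))
    return best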
The algorithm in the above proof can be used on-line to choose an approximately optimal action at
b0. We first estimate the values for all the child nodes of b0 and then select the action resulting in
the highest value. Suppose that at each belief point reachable from b0, we perform such an on-line
search for action selection. Using the technique in [12], one can show that if the value function
approximations at all the child nodes have error at most ε, then the policy π implicitly defined by
the on-line search has approximation error |V^π(b) − V*(b)| ≤ 2ε/(1−γ) for all b in R(b0).
Instead of performing the on-line search, one may want to precompute an approximately optimal
value function over R(b0) and perform one-step look-ahead on it at runtime for action selection.
The algorithm in Theorem 1 is not sufficient for this purpose, as it samples only enough points from
R(b0) to give a good value estimate at b0, but the sampled points do not form a cover of R(b0). One
possibility would be to find a cover of R(b0) first and then apply PBVI [11] over the points in the
cover. Unfortunately, we do not know how to find a cover of R(b0) efficiently. Instead, we give a
randomized algorithm that computes an approximately optimal value function with high probability.
Roughly, this algorithm incrementally builds a packing of R(b0) at each level of T_R. It first runs
the algorithm in Theorem 1 to obtain an initial packing P_i for each level i and estimates the values
of belief points in P_i. Then, to test whether the current packing P_i covers R(b0) well, it runs a
set of simulations of a fixed size. If the simulations encounter new points not covered by P_i, we
estimate their values and insert them into P_i. The process repeats until no more new belief points
are discovered within a set of simulations. We show that if the set of simulations is sufficiently large,
then the probability that in any future run of the policy, we encounter new belief points not covered
by the final set of packings can be made arbitrarily small.
Theorem 2 For any b0 ∈ B, let C(δ) be the δ-covering number of R(b0). Given constants τ ∈ (0, 1)
and ε > 0, a randomized algorithm can compute, with probability at least 1 − τ, an approximately
optimal value function in time
O( ( R_max / (ε(1−γ)) ) C( (1−γ)³ε / (16γR_max) )² log_γ ( (1−γ)ε / (4R_max) ) log( (1/τ) C( (1−γ)³ε / (16γR_max) ) log_γ ( (1−γ)ε / (4R_max) ) ) ),
such that the one-step look-ahead policy π induced by this value function has error |V^π(b0) −
V*(b0)| ≤ ε. It takes O( C( (1−γ)³ε / (16γR_max) ) ) time to use this value function to select an action at
runtime.
Both theorems above assume a small covering number of R(b0) for efficient computation. To
relax this assumption, we may require only that the covering number for an optimal reachable space
R*(b0) is small, as R*(b0) contains only points in B reachable under an optimal policy and can be much
smaller than R(b0). Unfortunately, under the relaxed condition, approximating the value at b0 is
NP-hard. We prove this by reduction from the Hamiltonian cycle problem. The main idea is to
show that a Hamiltonian cycle exists in a given graph if and only if an approximation to V*(b0), with
a suitably chosen error, can be computed for a POMDP whose optimal reachable space R*(b0) has
a small covering number. The result is closely related to one for finite-horizon POMDPs [8].
Theorem 3 Given constant ε > 0, computing an approximation V(b0) of V*(b0), with error
|V(b0) − V*(b0)| ≤ ε|V*(b0)|, is NP-hard, even if the covering number of R*(b0) is polynomial-sized.
The result above assumes the standard encoding of POMDP input with state-transition functions,
observation functions, and reward functions all represented discretely by matrices of suitable sizes.
By slightly extending the proof of Theorem 3, we can also show a related hardness result, which
assumes that the optimal policy has a compact representation.
Theorem 4 Given constant ε > 0, computing an approximation V(b0) of V*(b0), with error
|V(b0) − V*(b0)| ≤ ε|V*(b0)|, is NP-hard, even if the number of α-vectors required to represent an
optimal policy is polynomial-sized.
On the other hand, if an oracle provides us a proper cover of an optimal reachable space R*(b0),
then a good approximation of V*(b0) can be found efficiently.
Theorem 5 For any b0 ∈ B, given constant ε > 0 and a proper δ-cover C of R*(b0) with
δ = (1−γ)²ε / (2γR_max), an approximation V(b0) of V*(b0), with error |V(b0) − V*(b0)| ≤ ε, can be found in time
O( |C|² + |C| log_γ ( (1−γ)ε / (2R_max) ) ).
Together, the negative and the positive results (Theorems 3 to 5) indicate that a key difficulty for
point-based algorithms lies in finding a cover for R*(b0). In practice, to overcome the difficulty,
one may use problem-specific knowledge or heuristics to approximate R*(b0) through sampling.
Most point-based POMDP algorithms [11, 15, 14] interpolate the value function using α-vectors.
Although we use the nearest neighbor approximation to simplify the proofs of Theorems 1, 2, and
5, we want to point out that very similar results can be obtained using the α-vector representation if
we slightly modify the analysis of the approximation errors in the proofs.
5 Bounding the Covering Number
The covering number highlights several properties that reduce the complexity of POMDP planning
in practice. We describe them below and show how they affect the covering number.
5.1 Fully Observed State Variables
Suppose that there are d state variables, each of which has at most k possible values. If d0 of these
variables are fully observed, then for every such belief point, its vector representation contains at
most m = k^{d−d0} non-zero elements out of k^d elements in total. For a given initial belief b0, the
belief vectors with the same non-zero pattern form a subspace in R(b0), and R(b0) is a union of these
subspaces. We can compute a δ-cover for each subspace by discretizing each non-zero element of
the belief vectors to an accuracy of δ/m, and the size of the resulting δ-cover is at most (m/δ)^m. There
are k^{d0} such subspaces. So the δ-covering number of R(b0) is at most k^{d0} (m/δ)^m = k^{d0} ( k^{d−d0}/δ )^{k^{d−d0}}.
The fully observed variables thus give a doubly exponential reduction in the covering number: it
reduces the exponent by a factor of k^{d0} at the cost of a multiplicative factor of k^{d0}.
Proposition 1 Suppose that a POMDP has d state variables, each of which has at most k possible
values. If d0 state variables are fully observed, then for any belief point b0, the δ-covering number
of the reachable belief space R(b0) is at most k^{d0} ( k^{d−d0}/δ )^{k^{d−d0}}.
Consider again the Tag problem described in Section 1. The state consists of both the robot's and
the target's positions, as well as the status indicating whether the target is tagged. The robot and
the target can occupy any position in an environment modeled as a grid of 29 cells. If the robot has
the target tagged, they must be in the same position. So, there are 29 × 29 + 29 = 870 states in
total, and the belief space B is 870-dimensional. However, the robot's position is fully observed. By
Proposition 1, the δ-covering number is at most 30 · (30/δ)^30. Indeed, for Tag, any reachable belief
space R(b0) is effectively a union of two sets. One set corresponds to the case when the target is
not tagged and consists of the union of 29 sub-spaces of 29 dimensions. The other set corresponds
to the case when the target is tagged and consists of exactly 29 points. Clearly, the covering number
captures the underlying complexity of R(b0) more accurately than the dimensionality of R(b0).
5.2 Sparse Beliefs
Highly informative observations often result in sparse beliefs, i.e., beliefs whose vector representation is sparse. For example, in the Tag problem, the state is known exactly if the robot and the target
are in the same position, leaving only a single non-zero element in the belief vector. Fully observed
state variables usually result in very sparse beliefs and can be considered a special case.
If the beliefs are always sparse, we can exploit the sparsity to bound the covering number. Otherwise,
sparsity may still give a hint that the covering number is smaller than what would be suggested by
the dimensionality of the belief space. By exploiting the non-zero patterns of belief vectors in a
way similar to that in Section 5.1, we can derive the following result:
Proposition 2 Let B be a set in an n-dimensional belief space. If every belief in B can be
represented as a vector with at most m non-zero elements, then the δ-covering number of B is
O( n^m (m/δ)^m ).
5.3 Smooth Beliefs
Sparse beliefs are often peaky. Interestingly, when the beliefs are sufficiently smooth, e.g., when
their Fourier representations are sparse, the covering number is also small. Below we give a more
general result, assuming that the beliefs can be represented as a linear combination of a small number
of basis vectors.
Proposition 3 Let B be a set in an n-dimensional belief space. Assume that every belief b ∈ B
can be represented as a linear combination of m basis vectors such that the magnitudes of both the
elements of the basis vectors and the coefficients representing b are bounded by a constant C. The
δ-covering number of B is O( ( 2C²mn/δ )^m ) when the basis vectors are real-valued, and O( ( 4C²mn/δ )^{2m} )
when they are complex-valued.
Smooth beliefs are usually a result of actions with high uncertainty and uninformative observations.
5.4 Circulant State-Transition Matrices
Let us now shift our attention from observations to actions, in particular, actions that can be represented by state-transition matrices with special structures. We start with an example. A mobile
robot scout needs to navigate from a known start position to a goal position in a large environment
modeled as a grid. It must not enter certain danger zones to avoid detection by enemies. The robot
can take four actions to move in the {N, S, E, W} directions, but has imperfect control. Since
the environment is large, we assume that the robot always operates far away from the boundary and
the boundary effect can be ignored. At each grid cell, the robot moves to the intended cell with
probability 1 ? p and moves diagonally to the two cells adjacent to the intended one with probability
0.5p. The robot can use its sensors to make highly accurate observations on its current position, but
by doing so, it runs the risk of being detected.
Under our assumptions, the state-transition functions representing robot actions are invariant over
the grid cells and can thus be represented by circulant matrices [1]. Circulant matrices are widely
used in signal processing and control theory, as they can represent all discrete-time linear translationinvariant systems. In the context of POMDPs, if applying a state-transition matrix to a belief b
corresponds to convolution with a suitable distribution, then the state-transition matrix is circulant.
One of the key properties of circulant matrices is that they all share the same eigenvectors. Therefore,
we can multiply them in any arbitrary order and obtain the same result. In our example, this means
that given a set of robot moves, we can apply them in any order and the resulting belief on the robot?s
position is the same. This greatly reduces the number of possible beliefs and correspondingly the
covering number in open-loop POMDPs, where there are no observations involved.
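The claim that circulant transition matrices can be applied in any order is easy to check numerically. The sketch below is our own; it uses a 1-D ring of 29 cells instead of a 2-D grid purely to keep the code short. It builds two slip-prone move matrices and verifies that they commute:

import numpy as np
from scipy.linalg import circulant

n, p = 29, 0.2
# First column of each matrix: where probability mass from cell 0 goes.
east = np.zeros(n); east[1] = 1 - p; east[0] = east[2] = 0.5 * p
west = np.zeros(n); west[-1] = 1 - p; west[0] = west[-2] = 0.5 * p
E, W = circulant(east), circulant(west)

b = np.zeros(n); b[0] = 1.0                 # belief: robot known to be at cell 0
assert np.allclose(E @ W @ b, W @ E @ b)    # order of moves does not matter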
Proposition 4 Suppose that all $\ell$ state-transition matrices representing actions are circulant and that each matrix has at most $m$ eigenvalues whose magnitudes are greater than $\gamma$, with $0 < \gamma < 1$. In an open-loop POMDP, for any point $b_0$ in an $n$-dimensional belief space, the $\delta$-covering number of the reachable belief space $R(b_0)$ is $O\left((8\ell mn/\delta)^{2\ell m} + \ell^h\right)$, where $h = \log_\gamma(\delta/2n)$.
In our example, suppose that the robot scout makes a sequence of moves and needs to decide when to take occasional observations along the way to localize itself. To bound the covering number, we divide the sequence of moves into subsequences such that each subsequence starts with an observation and ends right before the next observation. In each subsequence, the robot starts at a specific belief and moves without additional observations. So, within a subsequence, the beliefs encountered have a $\delta$-cover of size $O((8\ell mn/\delta)^{2\ell m} + \ell^h)$ by Proposition 4. Furthermore, since all the observations are highly informative, we assume that the initial beliefs of all subsequences can be represented as vectors with at most $m'$ non-zero elements. The set of all initial beliefs then has a $\delta$-cover of size $O(n^{m'}(m'/\delta)^{m'})$ by Proposition 2. From Lemma 3 below, we know that in an open-loop POMDP, two belief trajectories can only get closer to each other as they progress.
Lemma 3 Let $M$ be a Markov matrix and $\|b_1 - b_2\| \le \delta$. Then $\|Mb_1 - Mb_2\| \le \delta$.
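A quick randomized check of Lemma 3 (our sketch; here the norm is taken to be L1 and M is column-stochastic):

import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.random((n, n)); M /= M.sum(axis=0)   # column-stochastic Markov matrix
b1 = rng.random(n); b1 /= b1.sum()
b2 = rng.random(n); b2 /= b2.sum()
assert np.abs(M @ b1 - M @ b2).sum() <= np.abs(b1 - b2).sum() + 1e-12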
Therefore, to get a $\delta$-cover of the space $R(b_0)$ that the robot scout can reach from a given $b_0$, it suffices to first compute a $\delta/2$-cover $C$ of the initial belief points for all possible subsequences of moves and then take the union of the $\delta/2$-covers of the belief points traversed by the subsequences whose initial belief points lie in $C$. The $\delta$-cover of $R(b_0)$ then has its size bounded by $O(n^{m'}(2m'/\delta)^{m'}(16\ell mn/\delta)^{2\ell m} + \ell^h)$, where $h = \log_\gamma(\delta/4n)$.
The requirement of translation invariance means that circulant matrices have some limitations in
modeling certain phenomena well. In mobile robot navigation, obstacles or boundaries in the environment often cause difficulties. However, if the environment is sufficiently large and the obstacles
are sparse, the behaviors of some systems can be approximated by circulant matrices.
6 Conclusion
We propose the covering number as a measure of the complexity of POMDP planning. We believe
that for point-based algorithms, the covering number captures the difficulty of computing approximate solutions to POMDPs better than other commonly used measures, such as the number of states.
The covering number highlights several interesting properties that reduce the complexity of POMDP
planning, and quantifies their effects. Using the covering number, we have shown several results that
help to identify the main difficulty of POMDP planning using point-based algorithms. These results
indicate that a promising approach in practice is to approximate an optimal reachable space through
sampling. We are currently exploring this idea and have already obtained promising initial results
[4]. On a set of standard test problems, our new point-based algorithm outperformed the fastest
existing point-based algorithm by 5 to 10 times on some problems, while remaining competitive on
others.
Acknowledgements. We thank Leslie Kaelbling and Tomás Lozano-Pérez for many insightful discussions on POMDPs. This work is supported in part by NUS ARF grants R-252-000-240-112 and R-252-000-243-112.
References
[1] R.M. Gray. Toeplitz and Circulant Matrices: A Review. Now Publishers Inc, 2006.
[2] M. Hauskrecht. Value-function approximations for partially observable Markov decision processes. J. Artificial Intelligence Research, 13:33–94, 2000.
[3] J. Hoey, A. von Bertoldi, P. Poupart, and A. Mihailidis. Assisting persons with dementia during handwashing using a partially observable Markov decision process. In Proc. Int. Conf. on Vision Systems, 2007.
[4] D. Hsu, W.S. Lee, and N. Rong. Accelerating point-based POMDP algorithms through successive approximations of the optimal reachable space. Technical Report TRA4/07, National University of Singapore, School of Computing, April 2007.
[5] L.P. Kaelbling, M.L. Littman, and A.R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1–2):99–134, 1998.
[6] M. Kearns, Y. Mansour, and A.Y. Ng. A sparse sampling algorithm for near optimal planning in large Markov decision processes. Machine Learning, 49(2–3):193–208, 2002.
[7] M.L. Littman. Algorithms for sequential decision making. PhD thesis, Dept. of Computer Science, Brown University, 1996.
[8] C. Lusena, J. Goldsmith, and M. Mundhenk. Nonapproximability results for partially observable Markov decision processes. J. Artificial Intelligence Research, 14:83–103, 2002.
[9] O. Madani, S. Hanks, and A. Condon. On the undecidability of probabilistic planning and infinite-horizon partially observable Markov decision problems. In Proc. Nat. Conf. on Artificial Intelligence, pages 541–548, 1999.
[10] C. Papadimitriou and J.N. Tsitsiklis. The complexity of Markov decision processes. Mathematics of Operations Research, 12(3):441–450, 1987.
[11] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm for POMDPs. In Proc. Int. Jnt. Conf. on Artificial Intelligence, pages 477–484, 2003.
[12] S.P. Singh and R.C. Yee. An upper bound on the loss from approximate optimal-value functions. Machine Learning, 16(3):227–233, 1994.
[13] R.D. Smallwood and E.J. Sondik. The optimal control of partially observable Markov processes over a finite horizon. Operations Research, 21:1071–1088, 1973.
[14] T. Smith and R. Simmons. Point-based POMDP algorithms: Improved analysis and implementation. In Proc. Uncertainty in Artificial Intelligence, 2005.
[15] M.T.J. Spaan and N. Vlassis. A point-based POMDP algorithm for robot planning. In Proc. IEEE Int. Conf. on Robotics & Automation, 2004.
[16] N.L. Zhang and W. Zhang. Speeding up the convergence of value iteration in partially observable Markov decision processes. Journal of Artificial Intelligence Research, 14:29–51, 2002.
2,527 | 3,292 | Fast Variational Inference
for Large-scale Internet Diagnosis
John C. Platt, Emre Kıcıman, and David A. Maltz
Microsoft Research
1 Microsoft Way
Redmond, WA 98052
{jplatt,emrek,dmaltz}@microsoft.com
Abstract
Web servers on the Internet need to maintain high reliability, but the cause
of intermittent failures of web transactions is non-obvious. We use approximate Bayesian inference to diagnose problems with web services. This
diagnosis problem is far larger than any previously attempted: it requires
inference of 10^4 possible faults from 10^5 observations. Further, such inference must be performed in less than a second. Inference can be done at
this speed by combining a mean-field variational approximation and the
use of stochastic gradient descent to optimize a variational cost function.
We use this fast inference to diagnose a time series of anomalous HTTP
requests taken from a real web service. The inference is fast enough to
analyze network logs with billions of entries in a matter of hours.
1 Introduction
Internet content providers, such as MSN, Google and Yahoo, all depend on the correct
functioning of the wide-area Internet to communicate with their users and provide their
services. When these content providers lose network connectivity with some of their users,
it is critical that they quickly resolve the problem, even if the failure lies outside their own
systems. 1 One challenge is that content providers have little direct visibility into the
wide-area Internet infrastructure and the causes of user request failures. Requests may fail
because of problems in the content provider?s systems or faults in the network infrastructure
anywhere between the user and the content provider, including routers, proxies, firewalls,
and DNS servers. Other failing requests may be due to denial of service attacks or bugs in
the user?s software. To compound the diagnosis problem, these faults may be intermittent:
we must use probabilistic inference to perform diagnosis, rather than using logic.
A second challenge is the scale involved. Not only do popular Internet content providers
receive billions of HTTP requests a week, but the number of potential causes of failure are
numerous. Counting only the coarse-grained Autonomous Systems (ASes) through which
users receive Internet connectivity, there are over 20k potential causes of failure. In this
paper, we show that approximate Bayesian inference scales to handle this high rate of
observations and accurately estimates the underlying failure rates of such a large number of
potential causes of failure.
To scale Bayesian inference to Internet-sized problems, we must make several simplifying
approximations. First, we introduce a bipartite graphical model using overlapping noisy-ORs, to model the interactions between faults and observations. Second, we use mean-field variational inference to map the diagnosis problem to a reasonably-sized optimization problem. Third, we further approximate the integral in the variational method. Fourth, we speed up the optimization problem using stochastic gradient descent.

¹A loss of connectivity to users translates directly into lost revenue and a sullied reputation for content providers, even if the cause of the problem is a third-party network component.
The paper is structured as follows: Section 1.1 discusses related work to this paper. We
describe the graphical model in Section 2, and the approximate inference in that model
in Section 2.1, including stochastic gradient descent (in Section 3). We present inference
results on synthetic and real data in Section 4 and then draw conclusions.
1.1 Previous Work
The original application of Bayesian diagnosis was medicine. One of the original diagnosis networks was QMR-DT [14], a bipartite graphical model that used noisy-OR to model symptoms given diseases. Exact inference in such networks is intractable (exponential in the number of positive symptoms [2]), so different approximation and sampling algorithms were
proposed. Shwe and Cooper proposed likelihood-weighted sampling [13], while Jaakkola
and Jordan proposed using a variational approximation to unlink each input to the network [3]. With only thousands of possible symptoms and hundreds of diseases, QMR-DT
was considered very challenging.
More recently, researchers have applied Bayesian techniques for the diagnosis of computers
and networks [1][12][16]. This work has tended to avoid inference in large networks, due to
speed constraints. In contrast, we attack the enormous inference problem directly.
2 Graphical model of diagnosis

[Figure 1 here: a three-layer model whose node labels are, from top to bottom, Beta, Bernoulli, and Noisy-OR.]
Figure 1: The full graphical model for the diagnosis of Internet faults
The initial graphical model for diagnosis is shown in Figure 1. Starting at the bottom, we
observe a large number of binary random variables, each corresponding to the success/failure
of a single HTTP request. The failure of an HTTP request can be modeled as a noisy-OR [11]
of a set of Bernoulli-distributed binary variables, each of which models the underlying factors
that can cause a request to fail:
$P(V_i = \mathrm{fail} \mid D_{ij}) = 1 - (1 - r_{i0}) \prod_j (1 - r_{ij} d_{ij}), \qquad (1)$
where rij is the probability that the observation is a failure if a single underlying fault dij
is present. The matrix rij is typically very sparse, because there are only a small number of
possible causes for the failure of any request. The ri0 parameter models the probability of a
spontaneous failure without any known cause. The rij are set by elicitation of probabilities
from an expert.
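For concreteness, the generative story of Eqs. (1)-(3) can be written as a short sampler. This is our own illustrative sketch; the sizes, prior parameters, and connection strengths below are invented for the demonstration:

import numpy as np

rng = np.random.default_rng(1)
n_faults, n_obs = 5, 8
r0 = 0.01                                         # spontaneous failure probability
r = (rng.random((n_obs, n_faults)) < 0.3) * 0.8   # sparse fault-to-observation strengths

rho = rng.beta(0.5, 10.0, size=n_faults)          # fault rates, one Beta draw per fault
d = rng.random((n_obs, n_faults)) < rho           # instantaneous faults D_ij ~ Bernoulli(rho_j)
p_fail = 1 - (1 - r0) * np.prod(1 - r * d, axis=1)   # noisy-OR of Eq. (1)
V = rng.random(n_obs) < p_fail                    # observed request failures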
The noisy-OR models the causal structure in the network, and its connections are derivable
from the metadata associated with the HTTP request. For example, a single request can fail
Figure 2: Graphical model after integrating out instantaneous faults: a bipartite noisy-OR
network with Beta distributions as hidden variables
because its server has failed, or because a misconfigured or overloaded router can cause an
AS to lose connectivity to the content provider, or because the user agent is not compatible
with the service. All of these underlying causes are modeled independently for each request,
because possible faults in the system can be intermittent.
Each of the Bernoulli variables $D_{ij}$ depends on an underlying continuous fault-rate variable $F_j \in [0, 1]$:

$P(D_{ij} \mid F_j = \rho_j) = \rho_j^{d_{ij}} (1 - \rho_j)^{1 - d_{ij}}, \qquad (2)$

where $\rho_j$ is the probability of a fault manifesting at any time. We model the $F_j$ as independent Beta distributions, one for each fault:

$p(F_j = \rho_j) = \frac{1}{B(\alpha_j^0, \beta_j^0)}\, \rho_j^{\alpha_j^0 - 1} (1 - \rho_j)^{\beta_j^0 - 1}, \qquad (3)$
where B is the beta function. The fan-out for each of these fault rates can be different:
some of these fault rates are connected to many observations, while less common ones are
connected to fewer.
Our goal is to model the posterior distribution $P(\vec F \mid \vec V)$ in order to identify hidden faults and track them through time. The existence of the $D_{ij}$ random variables is a nuisance. We do not want to estimate $P(\vec D \mid \vec V)$ for any $D_{ij}$: the distribution of instantaneous problems is not interesting. Fortunately, we can exactly integrate out these nuisance variables, because they are connected to only one observation through a noisy-OR.

After integrating out the $D_{ij}$, the graphical model is shown in Figure 2. The model is now completely analogous to the QMR-DT model [14], but instead of the noisy-OR combining binary random variables, they combine rate variables:

$P(V_i = \mathrm{fail} \mid F_j = \rho_j) = 1 - (1 - r_{i0}) \prod_j (1 - r_{ij} \rho_j). \qquad (4)$
One can view (4) as a generalization of a noisy-OR to continuous [0, 1] variables.
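This generalization is exactly the marginal of Eq. (1) over the Bernoulli draws, which a small Monte-Carlo check (ours) confirms:

import numpy as np

rng = np.random.default_rng(2)
rho = np.array([0.2, 0.05]); r_row = np.array([0.9, 0.5]); r0 = 0.01
exact = 1 - (1 - r0) * np.prod(1 - r_row * rho)           # Eq. (4)
d = rng.random((100_000, 2)) < rho                        # D_ij ~ Bernoulli(rho_j)
mc = np.mean(1 - (1 - r0) * np.prod(1 - r_row * d, axis=1))
print(exact, mc)   # agree up to Monte-Carlo error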
2.1 Approximations to make inference tractable

In order to scale inference up to 10^4 hidden variables and 10^5 observations, we choose a simple, robust approximate inference algorithm: mean-field variational inference [4]. Mean-field variational inference approximates the posterior $P(\vec F \mid \vec V)$ with a factorized distribution. For inferring fault rates, we choose to approximate $P$ with a product of beta distributions

$Q(\vec F \mid \vec V) = \prod_j q(F_j \mid \vec V) = \prod_j \frac{1}{B(\alpha_j, \beta_j)}\, \rho_j^{\alpha_j - 1} (1 - \rho_j)^{\beta_j - 1}. \qquad (5)$
Mean-field variational inference maximizes a lower bound on the evidence of the model:

$\max_{\vec\alpha, \vec\beta} L = \int Q(\vec\rho \mid \vec V) \log \frac{P(\vec V \mid \vec\rho)\, p(\vec\rho)}{Q(\vec\rho \mid \vec V)}\, d\vec\rho. \qquad (6)$

This integral can be broken into two terms: a cross-entropy between the approximate posterior and the prior, and an expected log-likelihood of the observations:

$\max_{\vec\alpha, \vec\beta} L = -\int Q(\vec\rho \mid \vec V) \log \frac{Q(\vec\rho \mid \vec V)}{p(\vec\rho)}\, d\vec\rho + \left\langle \log P(\vec V \mid \vec F) \right\rangle_Q. \qquad (7)$
The first integral is the negative of a sum of cross-entropies between Beta distributions with a closed form:

$D_{KL}(q_j \| p_j) = \log\frac{B(\alpha_j^0, \beta_j^0)}{B(\alpha_j, \beta_j)} + (\alpha_j - \alpha_j^0)\Psi(\alpha_j) + (\beta_j - \beta_j^0)\Psi(\beta_j) - (\alpha_j + \beta_j - \alpha_j^0 - \beta_j^0)\Psi(\alpha_j + \beta_j), \qquad (8)$

where $\Psi$ is the digamma function.
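Since Eq. (8) is in closed form, it translates directly into a few lines of code. A sketch using SciPy's betaln and digamma (our own helper, not the authors' code):

import numpy as np
from scipy.special import betaln, digamma

def beta_kl(a, b, a0, b0):
    # D_KL(Beta(a, b) || Beta(a0, b0)) as in Eq. (8); works elementwise on arrays
    return (betaln(a0, b0) - betaln(a, b)
            + (a - a0) * digamma(a)
            + (b - b0) * digamma(b)
            - (a + b - a0 - b0) * digamma(a + b))

print(beta_kl(2.0, 5.0, 2.0, 5.0))   # 0.0 for identical distributions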
However, the expected log likelihood of a noisy-OR integrated over a product of Beta distributions does not have an analytic form. Therefore, we employ the MF(0) approximation
of Ng and Jordan [9], replacing the expectation of the log likelihood with the log likelihood
of the expectation. The second term then becomes the sum of a set of log likelihoods, one
per observation:
$L(V_i) = \begin{cases} \log\left(1 - (1 - r_{i0}) \prod_j \left[1 - r_{ij}\,\alpha_j/(\alpha_j + \beta_j)\right]\right) & \text{if } V_i = 1 \text{ (failure);} \\ \log(1 - r_{i0}) + \sum_j \log\left[1 - r_{ij}\,\alpha_j/(\alpha_j + \beta_j)\right] & \text{if } V_i = 0 \text{ (success).} \end{cases} \qquad (9)$
For the Internet diagnosis case, the MF(0) approximation is reasonable: we expect the
posterior distribution to be concentrated around its mean, due to the large amount of data
that is available. Ng and Jordan [9] have proved accuracy bounds for MF(0) based on
the number of parents that an observation has.
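Eq. (9) is likewise straightforward to implement. In the sketch below (our naming), alpha and beta are the variational Beta parameters of the parent faults of one observation, r_row holds the corresponding noisy-OR strengths, and log1p is used for numerical stability:

import numpy as np

def log_lik(v_i, alpha, beta, r_row, r0):
    # Eq. (9): MF(0) expected log-likelihood of a single observation
    terms = np.log1p(-r_row * alpha / (alpha + beta))
    if v_i == 1:                                   # failure
        return np.log1p(-(1 - r0) * np.exp(terms.sum()))
    return np.log1p(-r0) + terms.sum()             # success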
The final cost function for a minimization routine then becomes

$\min_{\vec\alpha, \vec\beta} C = \sum_j D_{KL}(q_j \| p_j) - \sum_i L(V_i). \qquad (10)$

3 Variational inference by stochastic gradient descent
In order to apply unconstrained optimization algorithms to minimize (10), we need to transform the variables: only positive $\alpha_j$ and $\beta_j$ are valid, so we parameterize them by

$\alpha_j = e^{a_j}, \qquad \beta_j = e^{b_j}, \qquad (11)$

and the gradient computation becomes

$\frac{\partial C}{\partial a_j} = \alpha_j \left( \sum_j \frac{\partial D_{KL}(q_j \| p_j)}{\partial \alpha_j} - \sum_i \frac{\partial L(V_i)}{\partial \alpha_j} \right), \qquad (12)$

with a similar gradient for $b_j$. Note that this gradient computation can be quite computationally expensive, given that $i$ sums over all of the observations.
For Internet diagnosis, we can decompose the observation stream into blocks, where the size
of the block is determined by how quickly the underlying rates of faults change, and how
finely we want to sample those rates. We typically use blocks of 100,000 observations, which
can make the computation of the gradient expensive. Further, we repeat the inference over
and over again, on thousands of blocks of data: we prefer a fast optimization procedure over
a highly accurate one.
Therefore, we investigated the use of stochastic gradient descent for optimizing the variational cost function. Stochastic gradient descent approximates the full gradient with a
single term from the gradient: the state of the optimization is updated using that single term [5]. This enables the system to converge quickly to an approximate answer. The details of stochastic gradient descent are shown in Algorithm 1.

Algorithm 1 Variational Gradient Descent
Require: noisy-OR parameters r_ij, priors α_j^0, β_j^0, observations V_i
  Initialize a_j = log(α_j^0), b_j = log(β_j^0)
  Initialize y_j, z_j to 0
  for k = 1 to number of epochs do
    for all faults j do
      α_j = exp(a_j), β_j = exp(b_j)
      y_j ← γ y_j + (1 − γ) ∂D_KL(q_j‖p_j; α_j, β_j)/∂a_j
      z_j ← γ z_j + (1 − γ) ∂D_KL(q_j‖p_j; α_j, β_j)/∂b_j
      a_j ← a_j − η y_j
      b_j ← b_j − η z_j
    end for
    for all observations i do
      for all parent faults j of observation V_i do
        α_j = exp(a_j), β_j = exp(b_j)
      end for
      for all parent faults j of observation V_i do
        y_j ← γ y_j − (1 − γ) ∂L(V_i; α, β)/∂a_j
        z_j ← γ z_j − (1 − γ) ∂L(V_i; α, β)/∂b_j
        a_j ← a_j − η y_j
        b_j ← b_j − η z_j
      end for
    end for
  end for
Estimating the sum in equation (12) with a single term adds a tremendous amount of noise
to the estimates. For example, the sign of a single L(Vi ) gradient term depends only on
the sign of Vi . In order to reduce the noise in the estimate, we use momentum [15]: we
exponentially smooth the gradient with a first-order filter before applying it to the state
variables. This momentum modification is shown in Algorithm 1. We typically use a large
step size (? = 0.1) and momentum term (? = 0.99), in order to both react quickly to changes
in the fault rate and to smooth out noise.
Stochastic gradient descent can be used as a purely on-line method (where each data point
is seen only once), setting the "number of epochs" in Algorithm 1 to 1. Alternatively, it can
get higher accuracy if it is allowed to sweep through the data multiple times.
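To make the update rules concrete, here is a compact sketch of one epoch of Algorithm 1. It restates Eq. (9) in terms of the log-parameters, and it uses central-difference gradients purely to keep the sketch short; it also assumes a helper kl(a, b) returning the per-fault KL of Eq. (8) at alpha = exp(a), beta = exp(b) against the fixed prior. All names are ours, not the authors':

import numpy as np

def obs_loglik(a, b, v, r_row, r0):
    # Eq. (9) as a function of the log-parameters of the parent faults
    alpha, beta = np.exp(a), np.exp(b)
    terms = np.log1p(-r_row * alpha / (alpha + beta))
    if v == 1:
        return np.log1p(-(1 - r0) * np.exp(terms.sum()))
    return np.log1p(-r0) + terms.sum()

def grad(f, x, eps=1e-5):
    # central differences; an analytic gradient would be much faster
    g = np.zeros_like(x)
    for j in range(x.size):
        e = np.zeros_like(x); e[j] = eps
        g[j] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def sgd_epoch(a, b, y, z, V, parents, R, r0, kl, eta=0.1, gamma=0.99):
    # prior step: smoothed gradient of the KL term, once per fault per epoch
    y[:] = gamma * y + (1 - gamma) * grad(lambda aa: kl(aa, b).sum(), a)
    z[:] = gamma * z + (1 - gamma) * grad(lambda bb: kl(a, bb).sum(), b)
    a -= eta * y; b -= eta * z
    # data steps: one stochastic update per observation, touching only its parents
    for v, js, r_row in zip(V, parents, R):
        y[js] = gamma * y[js] - (1 - gamma) * grad(
            lambda aj: obs_loglik(aj, b[js], v, r_row, r0), a[js])
        z[js] = gamma * z[js] - (1 - gamma) * grad(
            lambda bj: obs_loglik(a[js], bj, v, r_row, r0), b[js])
        a[js] -= eta * y[js]; b[js] -= eta * z[js]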
3.1 Other possible approaches
We considered and tested several other approaches to solving the approximate inference
problem.
Jaakkola and Jordan propose a variational inference method for bipartite noisy-OR networks [3], where one variational parameter is introduced to unlink one observation from
the network. We typically have far more observations than possible faults: this previous
approach would have forced us to solve very large optimization problems (with 100,000 parameters). Instead, we solve an optimization that has dimension equal to the number of
faults.
We originally optimized the variational cost function (10) with both BFGS and the trustregion algorithm in the Matlab optimization toolbox. This turned out to be far worse than
stochastic gradient descent. We found that a C# implementation of L-BFGS, as described in
Nocedal and Wright [10] sped up the exact optimization by orders of magnitude. We report
on the L-BFGS performance, below: it is within 4x the speed of the stochastic gradient
descent.
We experimented with Metropolis-Hastings to sample from the posterior, using a Gaussian
random walk in (aj , bj ). We found that the burn-in time was very long. Also, each update
is slow, because the speed of a single update depends on the fan-out of each fault. In the
Internet diagnosis network, the fan-out is quite high (because a single fault affects many
observations). Thus, Metropolis-Hastings was far slower than variational inference.
We did not try loopy belief propagation [8], nor expectation propagation [6]. Because the
Beta distribution is not conjugate to the noisy OR, the messages passed by either algorithm
do not have a closed form.
Finally, we did not try the idea of learning to predict the posterior from the observations
by sampling from the generative model and learning the reverse mapping [7]. For Internet
diagnosis, we do not know the structure of graphical model for a block of data ahead of
time: the structure depends on the metadata for the requests in the log. Thus, we cannot
amortize the learning time of a predictive model.
4 Results
We test the approximations and optimization methods used for Internet diagnosis on both
synthetic and real data.
4.1 Synthetic data with known hidden state
Testing the accuracy of approximate inference is very difficult, because, for large graphical
models, the true posterior distribution is intractable. However, we can probe the reliability
of the model on a synthetic data set.
We start by generating fault rates from a prior (here, 2000 faults drawn from Beta(5e-3, 1)). We randomly generate connections from faults to observations, with probability 5 × 10⁻³. Each connection has a strength r_ij drawn randomly from [0, 1]. We generate
100,000 observations from the noisy-OR model (4). Given these observations, we predict an
approximate posterior.
Given that the number of observations is much larger than the number of faults, we expect
that the posterior distribution should tightly cluster around the rate that generated the
observations. Difference between the true rate and the mean of the approximate posterior
should reflect inaccuracies in the estimation.
[Scatter plot omitted: x-axis is the true underlying rate (0 to 0.9); y-axis is the error of the rate estimate (roughly -0.12 to 0.08).]
Figure 3: The error in estimate of rate versus true underlying rate. Black dots are L-BFGS,
Red dots are Stochastic Gradient Descent with 20 epochs.
The results for a run is shown in Figure 3. The figure shows that the errors in the estimate
are small enough to be very useful for understanding network errors. There is a slight
systematic bias in the stochastic gradient descent, as compared to L-BFGS. However, the
improvement in speed shown in Table 1 is worth the loss of accuracy: we need inference to
be as fast as possible to scale to billions of samples. The run times are for a uniprocessor
Pentium 4, 3 GHz, with code in C#.
Algorithm        Accuracy (RMSE)   Time (CPU sec)
L-BFGS           0.0033            38
SGD, 1 epoch     0.0343            0.5
SGD, 20 epochs   0.0075            11.7

Table 1: Accuracy and speed on synthetic data set
4.2 Real data from web server logs
We then tested the algorithm on real data from a major web service. Each observation
consists of a success or failure of a single HTTP request. We selected 18848 possible faults
that occur frequently in the dataset, including the web server that received the request,
which autonomous system that originated the request, and which ?user agent? (brower or
robot) generated the request.
We have been analyzing HTTP logs collected over several months with the stochastic gradient descent algorithm. In this paper, we present an analysis of a short 2.5 hour window
containing an anomalously high rate of failures, in order to demonstrate that our algorithm can help us understand the cause of failures based on observations in a real-world
environment.
We broke the time series of observations into blocks of 100,000 observations, and inferred the hidden rates for each block. The initial state of the optimizer was set to be the state of the optimizer at convergence of the previous block. Thus, for stochastic gradient descent, the momentum variables were carried forward from block to block.
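The warm-started, block-wise processing then amounts to a short outer loop. This is schematic; stream_blocks and record_rates are hypothetical helpers, not functions from the paper:

# a, b, y, z persist across blocks, so each block starts from the previous optimum
for V, parents, R in stream_blocks(http_log, block_size=100_000):
    for _ in range(3):   # a few SGD passes per block, as in the experiments
        sgd_epoch(a, b, y, z, V, parents, R, r0, kl)
    record_rates(np.exp(a) / (np.exp(a) + np.exp(b)))  # posterior-mean fault rates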
[Plot omitted: y-axis is the inferred fault rate (0 to 1); x-axis is the time of day (8:00 PM to 11:21 PM).]
Figure 4: The inferred fault rate for two Autonomous Systems, as a function of time. These
are the only two faults with high rate.
The results of this tracking experiment are shown in Figure 4. In this figure, we used
stochastic gradient descent and a Beta(0.1,100) prior. The figure shows the only two faults
whose probability went higher than 0.1 in this time interval: they correspond to two ASes in
the same city, both causing failures at roughly the same time. This could be due to a router
that is in common between them, or perhaps a denial-of-service attack that originated in that city.
The speed of the analysis is much faster than real time. For a data set of 10 million
samples, L-BFGS required 209 CPU seconds, while SGD (with 3 passes of data per block)
only required 51 seconds. This allows us to go through logs containing billions of entries in
a matter of hours.
5 Conclusions
This paper presents high-speed variational inference to diagnose problems on the scale of
the Internet. Given observations at a web server, the diagnosis can determine whether a web
server needs rebooting, whether part of the Internet is broken, or whether the web server is
compatible with a browser or user agent.
In order to scale inference up to Internet-sized diagnosis problems, we make several approximations. First, we use mean-field variational inference to approximate the posterior
distribution. The expected log likelihood inside of the variational cost function is approximated with the MF(0) approximation. Finally, we use stochastic gradient descent to perform
the variational optimization.
We are currently using variational stochastic gradient descent to analyze logs that contain
billions of requests. We are not aware of any other applications of variational inference at
this scale. Future publications will include conclusions of such analysis, and implications
for web services and the Internet at large.
References
[1] M. Chen, A. X. Zheng, J. Lloyd, M. I. Jordan, and E. Brewer. Failure diagnosis using decision trees. In Proc. Int'l. Conf. Autonomic Computing, pages 36–43, 2004.
[2] D. Heckerman. A tractable inference algorithm for diagnosing multiple diseases. In Proc. UAI, pages 163–172, 1989.
[3] T. Jaakkola and M. Jordan. Variational probabilistic inference and the QMR-DT database. Journal of Artificial Intelligence Research, 10:291–322, 1999.
[4] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37:183–233, 1999.
[5] H. J. Kushner and G. G. Yin. Stochastic Approximation and Recursive Algorithms and Applications. Springer-Verlag, 2003.
[6] T. P. Minka. Expectation propagation for approximate Bayesian inference. In Proc. UAI, pages 362–369, 2001.
[7] Q. Morris. Recognition networks for approximate inference in BN20 networks. In Proc. UAI, pages 370–377, 2001.
[8] K. P. Murphy, Y. Weiss, and M. I. Jordan. Loopy belief propagation for approximate inference: An empirical study. In Proc. UAI, pages 467–475, 1999.
[9] A. Y. Ng and M. Jordan. Approximate inference algorithms for two-layer Bayesian networks. In Proc. NIPS, pages 533–539, 1999.
[10] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 2nd edition, 2006.
[11] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[12] I. Rish, M. Brodie, and S. Ma. Accuracy vs. efficiency tradeoffs in probabilistic diagnosis. In Proc. AAAI, pages 560–566, 2001.
[13] M. A. Shwe and G. F. Cooper. An empirical analysis of likelihood-weighting simulation on a large, multiply-connected medical belief network. Computers and Biomedical Research, 24(5):453–475, 1991.
[14] M. A. Shwe, B. Middleton, D. E. Heckerman, M. Henrion, E. J. Horvitz, H. P. Lehmann, and G. F. Cooper. Probabilistic diagnosis using a reformulation of the INTERNIST-1/QMR knowledge base. Methods of Information in Medicine, 30(4):241–255, 1991.
[15] J. J. Shynk and S. Roy. The LMS algorithm with momentum updating. In Proc. Intl. Symp. Circuits and Systems, pages 2651–2654, 1988.
[16] M. Steinder and A. Sethi. End-to-end service failure diagnosis using belief networks. In Proc. Network Operations and Management Symposium, pages 375–390, 2002.
2,528 | 3,293 | A Game-Theoretic Approach to Apprenticeship
Learning
Umar Syed
Computer Science Department
Princeton University
35 Olden St
Princeton, NJ 08540-5233
usyed@cs.princeton.edu
Robert E. Schapire
Computer Science Department
Princeton University
35 Olden St
Princeton, NJ 08540-5233
schapire@cs.princeton.edu
Abstract
We study the problem of an apprentice learning to behave in an environment with
an unknown reward function by observing the behavior of an expert. We follow
on the work of Abbeel and Ng [1] who considered a framework in which the true
reward function is assumed to be a linear combination of a set of known and observable features. We give a new algorithm that, like theirs, is guaranteed to learn
a policy that is nearly as good as the expert's, given enough examples. However,
unlike their algorithm, we show that ours may produce a policy that is substantially
better than the expert's. Moreover, our algorithm is computationally faster, is easier to implement, and can be applied even in the absence of an expert. The method
is based on a game-theoretic view of the problem, which leads naturally to a direct
application of the multiplicative-weights algorithm of Freund and Schapire [2] for
playing repeated matrix games. In addition to our formal presentation and analysis
of the new algorithm, we sketch how the method can be applied when the transition function itself is unknown, and we provide an experimental demonstration of
the algorithm on a toy video-game environment.
1 Introduction
When an agent is faced with the task of learning how to behave in a stochastic environment, a common approach is to model the situation using a Markov Decision Process. An MDP consists of
states, actions, rewards and a transition function. Once an MDP has been provided, the usual objective is to find a policy (i.e. a mapping from states to actions) that maximizes expected cumulative
reward collected by the agent.
Building the MDP model is usually the most difficult part of this process. One reason is that it is
often hard to correctly describe the environment's true reward function, and yet the behavior of the
agent is quite sensitive to this description. In practice, reward functions are frequently tweaked and
tuned to elicit what is thought to be the desired behavior. Instead of maximizing reward, another
approach often taken is to observe and follow the behavior of an expert in the same environment.
Learning how to behave by observing an expert has been called apprenticeship learning, with the
agent in the role of the apprentice.
Abbeel and Ng [1] proposed a novel and appealing framework for apprenticeship learning. In this
framework, the reward function, while unknown to the apprentice, is assumed to be equal to a linear
combination of a set of known features. They argued that while it may be difficult to correctly
describe the reward function, it is usually much easier to specify the features on which the reward
function depends.
With this setting in mind, Abbeel and Ng [1] described an efficient algorithm that, given enough
examples of the expert?s behavior, produces a policy that does at least as well as the expert with
respect to the unknown reward function. The number of examples their algorithm requires from the
expert depends only moderately on the number of features.
While impressive, a drawback of their results is that the performance of the apprentice is both upper- and lower-bounded by the performance of the expert. Essentially, their algorithm is an efficient
method for mimicking the expert?s behavior. If the behavior of the expert is far from optimal, the
same will hold for the apprentice.
In this paper, we take a somewhat different approach to apprenticeship learning that addresses this
issue, while also significantly improving on other aspects of Abbeel and Ng?s [1] results. We pose
the problem as learning to play a two-player zero-sum game in which the apprentice chooses a
policy, and the environment chooses a reward function. The goal of the apprentice is to maximize
performance relative to the expert, even though the reward function may be adversarially selected by
the environment with respect to this goal. A key property of our algorithm is that it is able to leverage
prior beliefs about the relationship between the features and the reward function. Specifically, if it is
known whether a feature is "good" (related to reward) or "bad" (inversely related to reward), then the
apprentice can use that knowledge to improve its performance. As a result, our algorithm produces
policies that can be significantly better than the expert?s policy with respect to the unknown reward
function, while at the same time are guaranteed to be no worse.
Our approach is based on a multiplicative weights algorithm for solving two-player zero-sum games
due to Freund and Schapire [2]. Their algorithm is especially well-suited to solving zero-sum games
in which the ?game matrix? is extremely large. It turns out that our apprenticeship learning setting
can be viewed as a game with this property.
Our results represent a strict improvement over those of Abbeel and Ng [1] in that our algorithm
is considerably simpler, provides the same lower bound on the apprentice?s performance relative to
the expert, and removes the upper bound on the apprentice?s performance. Moreover, our algorithm
requires less computational expense ? specifically, we are able to achieve their performance guarantee after only O(ln k) iterations, instead of the O(k ln k), where k is the number of features on
which the reward function depends. Additionally, our algorithm can be applied to a setting in which
no examples are available from the expert. In that case, our algorithm produces a policy that is optimal in a certain conservative sense. We are also able to extend our algorithm to a situation where
the MDP?s transition function ? is unknown. We conducted experiments from a small car driving
simulation that illustrate some of our theoretical findings.
Ratliff et al [3] formulated a related problem to apprenticeship learning, in which the goal is to find
a reward function whose optimal policy is similar to the expert?s policy. Quite different from our
work, mimicking the expert was an explicit goal of their approach.
2 Preliminaries
Our problem setup largely parallels that outlined in Abbeel and Ng [1]. We are given an infinite-horizon Markov Decision Process in which the reward function has been replaced by a set of features. Specifically, we are given an MDP\R $M = (S, A, \gamma, D, \theta, \phi)$, consisting of finite state and action sets $S$ and $A$, discount factor $\gamma$, initial state distribution $D$, transition function $\theta(s, a, s') \triangleq \Pr(s_{t+1} = s' \mid s_t = s, a_t = a)$, and a set of $k$ features defined by the function $\phi : S \to \mathbb{R}^k$.
The true reward function $R^*$ is unknown. For ease of exposition, we assume that $R^*(s) = w^* \cdot \phi(s)$ for some $w^* \in \mathbb{R}^k$, although we also show how our analysis extends to the case when this does not hold.

For any policy $\pi$ in $M$, the value of $\pi$ (with respect to the initial state distribution) is defined by

$V(\pi) \triangleq E\left[\sum_{t=0}^{\infty} \gamma^t R^*(s_t) \mid \pi, \theta, D\right],$

where the initial state $s_0$ is chosen according to $D$, and the remaining states are chosen according to $\pi$ and $\theta$. We also define a $k$-length feature expectations vector,

$\mu(\pi) \triangleq E\left[\sum_{t=0}^{\infty} \gamma^t \phi(s_t) \mid \pi, \theta, D\right].$

From its definition, it should be clear that "feature expectations" is a (somewhat misleading) abbreviation for "expected, cumulative, discounted feature values." Importantly, since $R^*(s) = w^* \cdot \phi(s)$, we have $V(\pi) = w^* \cdot \mu(\pi)$, by linearity of expectation.

We say that a feature expectations vector $\hat\mu$ is an $\epsilon$-good estimate of $\mu(\pi)$ if $\|\hat\mu - \mu(\pi)\|_\infty \le \epsilon$. Likewise, we say that a policy $\tilde\pi$ is $\epsilon$-optimal for $M$ if $|V(\tilde\pi) - V(\pi^*)| \le \epsilon$, where $\pi^*$ is an optimal policy for $M$, i.e. $\pi^* = \arg\max_\pi V(\pi)$.¹
We also assume that there is a policy $\pi_E$, called the expert's policy, which we are able to observe executing in $M$. Following Abbeel and Ng [1], our goal is to find a policy $\pi$ such that $V(\pi) \ge V(\pi_E) - \epsilon$, even though the true reward function $R^*$ is unknown. We also have the additional goal of finding a policy when no observations from the expert's policy are available. In that case, we find a policy that is optimal in a certain conservative sense.
Like Abbeel and Ng [1], the policy we find will not necessarily be stationary, but will instead be a mixed policy. A mixed policy $\psi$ is a distribution over $\Pi$, the set of all deterministic stationary policies in $M$. Because $\Pi$ is finite (though extremely large), we can fix a numbering of the policies in $\Pi$, which we denote $\pi^1, \ldots, \pi^{|\Pi|}$. This allows us to treat $\psi$ as a vector, where $\psi(i)$ is the probability assigned to $\pi^i$. A mixed policy $\psi$ is executed by randomly selecting the policy $\pi^i \in \Pi$ at time 0 with probability $\psi(i)$, and exclusively following $\pi^i$ thereafter. It should be noted that the definitions of value and feature expectations apply to mixed policies as well: $V(\psi) = E_{i \sim \psi}[V(\pi^i)]$ and $\mu(\psi) = E_{i \sim \psi}[\mu(\pi^i)]$. Also note that mixed policies do not have any advantage over stationary policies in terms of value: if $\pi^*$ is an optimal stationary policy for $M$, and $\psi^*$ is an optimal mixed policy, then $V(\pi^*) = V(\psi^*)$.
The observations from the expert's policy $\pi_E$ are in the form of $m$ independent trajectories in $M$, each, for simplicity, of the same length $H$. A trajectory is just the sequence of states visited by the expert: $s_0^i, s_1^i, \ldots, s_H^i$ for the $i$th trajectory. Let $\mu_E = \mu(\pi_E)$ be the expert's feature expectations. We compute an estimate $\hat\mu_E$ of $\mu_E$ by averaging the observed feature values from the trajectories:

$\hat\mu_E = \frac{1}{m} \sum_{i=1}^{m} \sum_{t=0}^{H} \gamma^t \phi(s_t^i).$
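Computing $\hat\mu_E$ is a direct translation of this formula. A minimal sketch (our own; phi is assumed to map a state to its length-k feature vector, and trajectories is a list of state sequences):

import numpy as np

def estimate_mu(trajectories, phi, gamma):
    # Average of discounted feature sums over the expert's trajectories.
    mu = np.zeros_like(phi(trajectories[0][0]), dtype=float)
    for states in trajectories:
        for t, s in enumerate(states):
            mu += gamma ** t * phi(s)
    return mu / len(trajectories)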
3 Review of the Projection Algorithm
We compare our approach to the "projection algorithm" of Abbeel and Ng [1], which finds a policy that is at least as good as the expert's policy with respect to the unknown reward function.²

Abbeel and Ng [1] assume that $\phi(s) \in [0, 1]^k$, and that $R^*(s) = w^* \cdot \phi(s)$ for some $w^* \in B^k$, where $B^k = \{w : \|w\|_1 \le 1\}$. Given $m$ independent trajectories from the expert's policy, the projection algorithm runs for $T$ iterations. It returns a mixed policy $\psi$ such that $\|\mu(\psi) - \mu_E\|_2 \le \epsilon$ as long as $T$ and $m$ are sufficiently large. In other words, their algorithm seeks to "match" the expert's feature expectations. The value of $\psi$ will necessarily be close to that of the expert's policy, since

$|V(\psi) - V(\pi_E)| = |w^* \cdot \mu(\psi) - w^* \cdot \mu_E| \le \|w^*\|_2 \|\mu(\psi) - \mu_E\|_2 \le \epsilon \qquad (1)$

where in Eq. (1) we used the Cauchy-Schwarz inequality and $\|w^*\|_2 \le \|w^*\|_1 \le 1$.
¹Note that this is weaker than the standard definition of optimality, as the policy only needs to be optimal with respect to the initial state distribution, and not necessarily at every state simultaneously.
²Abbeel and Ng [1] actually presented two algorithms for this task. Both had the same theoretical guarantees, but the projection algorithm is simpler and was empirically shown to be slightly faster.
The following theorem is the main result in Abbeel and Ng [1]. However, some aspects of their
analysis are not covered by this theorem, such as the complexity of each iteration of the projection algorithm, and the sensitivity of the algorithm to various approximations. These are discussed
immediately below.
Theorem 1 (Abbeel and Ng [1]). Given an MDP\R, and $m$ independent trajectories from an expert's policy $\pi_E$. Suppose we execute the projection algorithm for $T$ iterations. Let $\psi$ be the mixed policy returned by the algorithm. Then in order for

$|V(\psi) - V(\pi_E)| \le \epsilon \qquad (2)$

to hold with probability at least $1 - \delta$, it suffices that

$T \ge O\left(\frac{k}{(\epsilon(1-\gamma))^2} \ln \frac{k}{\epsilon(1-\gamma)}\right) \quad \text{and} \quad m \ge \frac{2k}{(\epsilon(1-\gamma))^2} \ln \frac{2k}{\delta}.$
We omit the details of the algorithm due to space constraints, but note that each iteration involves
only two steps that are computationally expensive:
1. Find an optimal policy with respect to a given reward function.
2. Compute the feature expectations of a given policy.
The algorithm we present in Section 5 performs these same expensive tasks in each iteration, but
requires far fewer iterations, just O(ln k) rather than O(k ln k), a tremendous savings when the
number of features k is large. Also, the projection algorithm has a post-processing step that requires
invoking a quadratic program (QP) solver. Comparatively, the post-processing step for our algorithm
is trivial.
Abbeel and Ng [1] provide several refinements of the analysis in Theorem 1. In particular, suppose that each sample trajectory has length $H \ge (1/(1-\gamma)) \ln(1/(\epsilon_H(1-\gamma)))$, and that an $\epsilon_P$-optimal policy is found in each iteration of the projection algorithm (see Step 1 above). Also let $\epsilon_R = \min_{w \in B^k} \max_s |R^*(s) - w \cdot \phi(s)|$ be the "representation error" of the features. Abbeel and Ng [1] comment at various points in their paper that $\epsilon_H$, $\epsilon_P$, and $O(\epsilon_R)$ should be added to the error bound of Theorem 1. In Section 5 we provide a unified analysis of these error terms in the context of our algorithm, and also incorporate an $\epsilon_F$ term that accounts for computing an $\epsilon_F$-good feature expectations estimate in Step 2 above. We prove that our algorithm is sensitive to these error terms in a similar way as the projection algorithm.
4 Apprenticeship Learning via Game Playing
Notice the two-sided bound in Theorem 1: the theorem guarantees that the apprentice will do almost
as well as the expert, but also almost as badly. This is because the value of a policy is a linear
combination of its feature expectations, and the goal of the projection algorithm is to match the
expert?s feature expectations.
We will take a different approach. We assume that ?(s) ? [?1, 1]k , and that R? (s) = w? ? ?(s) for
some w? ? Sk , where Sk = {w ? Rk : kwk1 = 1 and w 0}.3 The impact of this minor change
in the domains of w and ? is discussed further in Section 5.2. Let ? be the set of all mixed policies
in M . Now consider the optimization
v ? = max min [w ? ?(?) ? w ? ?E ] .
??? w?Sk
(3)
Our goal will be to find (actually, to approximate) the mixed policy ? ? that achieves v ? . Since
V (?) = w? ? ?(?) for all ?, we have that ? ? is the policy in ? that maximizes V (?) ? V (?E )
with respect to the worst-case possibility for w? . Since w? is unknown, maximizing for the worstcase is appropriate.
³We use ⪰ to denote componentwise inequality. Likewise, we use ≻ to denote strict inequality in every component.
We begin by noting that, because w and ψ are both distributions, Eq. (3) is in the form of a two-person zero-sum game. Indeed, this is the motivation for redefining the domain of w as we did. The quantity v* is typically called the game value. In this game, the "min player" specifies a reward function by choosing w, and the "max player" chooses a mixed policy ψ. The goal of the min player is to cause the max player's policy to perform as poorly as possible relative to the expert, and the max player's goal is just the opposite. A game is defined by its associated game matrix. In our case, the game matrix is the k × |Π| matrix

    G(i, j) = μ^j(i) − μ_E(i)                                                 (4)

where μ(i) is the ith component of μ and we have let μ^j = μ(π^j) be the vector of feature expectations for the jth deterministic policy π^j. Now Eq. (3) can be rewritten in the form

    v* = max_{ψ∈Ψ} min_{w∈S^k} wᵀGψ.                                          (5)

In Eq. (3) and (5), the max player plays first, suggesting that the min player has an advantage. However, the well-known minmax theorem of von Neumann says that we can swap the min and max operators in Eq. (5) without affecting the game value. In other words,

    v* = max_{ψ∈Ψ} min_{w∈S^k} wᵀGψ = min_{w∈S^k} max_{ψ∈Ψ} wᵀGψ.            (6)
Finding ψ* will not be useful unless we can establish that v* ≥ 0, i.e. that ψ* will do at least as well as the expert's policy with respect to the worst-case possibility for w*. This fact is not immediately clear, since we are restricting ourselves to mixtures of deterministic policies, while we do not assume that the expert's policy is deterministic. However, note that in the rightmost expression in Eq. (6), the maximization over ψ is done after w (and hence the reward function) has been fixed. So the maximum is achieved by the best policy in Ψ with respect to this fixed reward function. Note that if this is also an optimal policy, then v* will be nonnegative. It is well-known that in any MDP there always exists a deterministic optimal policy. Hence v* ≥ 0.

In fact, we may have v* > 0. Suppose it happens that μ(ψ*) ≻ μ(π_E). Then ψ* will dominate π_E, i.e. ψ* will have higher value than π_E regardless of the actual value of w*, because we assumed that w* ⪰ 0. Essentially, by assuming that each component of the true weight vector is nonnegative, we are assuming that we have correctly specified the "sign" of each feature. This means that, other things being equal, a larger value for each feature implies a larger reward.

So when v* > 0, the mixed policy ψ* to some extent ignores the expert, and instead exploits prior knowledge about the true reward function encoded by the features. We present experimental results that explore this aspect of our approach in Section 7.
5 The Multiplicative Weights for Apprenticeship Learning (MWAL) Algorithm
In the previous section, we motivated the goal of finding the mixed policy ψ* that achieves the maximum in Eq. (3) (or equivalently, in Eq. (5)). In this section we present an efficient algorithm for solving this optimization problem.
Recall the game formulated in the previous section. In the terminology of game theory, w and ψ are called strategies for the min and max player respectively, and ψ* is called an optimal strategy for the max player. Also, a strategy w̃ is called pure if w̃(i) = 1 for some i.

Typically, one finds an optimal strategy for a two-player zero-sum game by solving a linear program. However, the complexity of that approach scales with the size of the game matrix. In our case, the game matrix G is huge, since it has as many columns as the number of deterministic policies in the MDP\R.
Freund and Schapire [2] described a multiplicative weights algorithm for finding approximately optimal strategies in games with large or even unknown game matrices. To apply their algorithm to a game matrix G, it suffices to be able to efficiently perform the following two steps:

1. Given a min player strategy w, find arg max_{ψ∈Ψ} wᵀGψ.
2. Given a max player strategy ψ, compute w̃ᵀGψ for each pure strategy w̃.
Observe that these two steps are equivalent to the two steps of the projection algorithm from Section 3. Step 1 amounts to finding the optimal policy in a standard MDP with a known reward function. There is a huge array of techniques available for this, such as value iteration and policy iteration. Step 2 is the same as computing the feature expectations of a given policy. These can be computed exactly by solving k systems of linear equations, or they can be approximated using iterative techniques. Importantly, the complexity of both steps scales with the size of the MDP\R, and not with the size of the game matrix G.
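To make Step 2 concrete, here is a minimal sketch of the exact linear-system computation for a fixed stationary policy in a tabular MDP\R. The function name and the tabular representation are ours, not from the paper; the sketch solves the fixed point μ = φ + γP_π μ, one linear system per feature (done jointly below).

```python
import numpy as np

def feature_expectations(P_pi, Phi, gamma, start):
    """Exact feature expectations of a stationary policy (hypothetical tabular setup).

    P_pi:  (n, n) state-transition matrix induced by the policy.
    Phi:   (n, k) matrix whose s-th row is the feature vector phi(s).
    start: (n,) initial state distribution.

    Solves (I - gamma * P_pi) M = Phi, i.e. all k linear systems at once;
    row s of M is the vector of discounted feature expectations from state s.
    """
    n = P_pi.shape[0]
    M = np.linalg.solve(np.eye(n) - gamma * P_pi, Phi)
    return start @ M  # average over the initial state distribution
```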
Our Multiplicative Weights for Apprenticeship Learning (MWAL) algorithm is described below. Lines 7 and 8 of the algorithm correspond to Steps 1 and 2 directly above. The algorithm is essentially the MW algorithm of Freund and Schapire [2], applied to a game matrix very similar to G.⁴ We have also slightly extended their results to allow the MWAL algorithm, in lines 7 and 8, to estimate the optimal policy and its feature expectations, rather than requiring that they be computed exactly.
Algorithm 1 The MWAL algorithm
 1: Given: An MDP\R M and an estimate μ̂_E of the expert's feature expectations.
 2: Let β = (1 + √(2 ln k / T))⁻¹.
 3: Define G̃(i, μ) ≜ ((1 − γ)(μ(i) − μ̂_E(i)) + 2)/4, where μ ∈ R^k.
 4: Initialize W^(1)(i) = 1 for i = 1, . . . , k.
 5: for t = 1, . . . , T do
 6:    Set w^(t)(i) = W^(t)(i) / Σᵢ W^(t)(i) for i = 1, . . . , k.
 7:    Compute an ε_P-optimal policy π̂^(t) for M with respect to reward function R(s) = w^(t) · φ(s).
 8:    Compute an ε_F-good estimate μ̂^(t) of μ^(t) = μ(π̂^(t)).
 9:    W^(t+1)(i) = W^(t)(i) · exp(ln(β) · G̃(i, μ̂^(t))) for i = 1, . . . , k.
10: end for
11: Post-processing: Return the mixed policy ψ̄ that assigns probability 1/T to π̂^(t), for all t ∈ {1, . . . , T}.
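For readers who prefer code, the loop in Algorithm 1 can be sketched as follows. The two expensive steps (lines 7 and 8) are left as caller-supplied black boxes, and all identifiers are hypothetical; this is a sketch of the update rule, not the authors' implementation.

```python
import numpy as np

def mwal(mu_E_hat, solve_mdp, estimate_mu, k, T, gamma):
    """Sketch of the MWAL loop. solve_mdp(w) should return an (approximately)
    optimal policy for the reward R(s) = w . phi(s); estimate_mu(pi) should
    return (an estimate of) that policy's feature expectations in [-1, 1]^k."""
    beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(k) / T))      # line 2
    W = np.ones(k)                                         # line 4
    policies = []
    for t in range(T):
        w = W / W.sum()                                    # line 6
        pi_t = solve_mdp(w)                                # line 7
        mu_t = estimate_mu(pi_t)                           # line 8
        G = ((1.0 - gamma) * (mu_t - mu_E_hat) + 2.0) / 4.0  # line 3
        W = W * np.exp(np.log(beta) * G)                   # line 9
        policies.append(pi_t)
    # line 11: the mixed policy plays each stored policy with probability 1/T
    return policies
```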
Theorem 2 below provides a performance guarantee for the mixed policy ψ̄ returned by the MWAL algorithm, relative to the performance of the expert and the game value v*. Its correctness is largely based on the main result in Freund and Schapire [2]. A proof is available in the supplement [4].
Theorem 2. Given an MDP\R M, and m independent trajectories from an expert's policy π_E. Suppose we execute the MWAL algorithm for T iterations. Let ψ̄ be the mixed policy returned by the algorithm. Let ε_F and ε_P be the approximation errors from lines 7 and 8 of the algorithm. Let H ≥ (1/(1−γ)) ln(1/(ε_H(1−γ))) be the length of each sample trajectory. Let ε_R = min_{w∈S^k} max_s |R*(s) − w · φ(s)| be the representation error of the features. Let v* = max_{ψ∈Ψ} min_{w∈S^k} [w · μ(ψ) − w · μ_E] be the game value. Then in order for

    V(ψ̄) ≥ V(π_E) + v* − ε                                                   (7)

to hold with probability at least 1 − δ, it suffices that

    T ≥ 9 ln k / (2(ε′(1−γ))²)                                                (8)

    m ≥ (2 / (ε′(1−γ))²) ln (2k/δ)                                            (9)

where

    ε′ ≜ ε − (2ε_F + ε_P + 2ε_H + 2ε_R/(1−γ)) / 3.                            (10)
⁴Note that G̃ in Algorithm 1, in contrast to G in Eq. (4), depends on μ̂_E instead of μ_E. This is because μ_E is unknown, and must be estimated. The other differences between G̃ and G are of no real consequence, and are further explained in the supplement [4].
Note the differences between Theorem 1 and Theorem 2. Because v* ≥ 0, the guarantee of the MWAL algorithm in (7) is at least as strong as the guarantee of the projection algorithm in (2), and has the further benefit of being one-sided. Additionally, the iteration complexity of the MWAL algorithm is much lower. This not only implies a faster run time, but also implies that the mixed policy output by the MWAL algorithm consists of fewer stationary policies. And if a purely stationary policy is desired, it is not hard to show that the guarantee in (7) must hold for at least one of the stationary policies in the mixed policy (this is also true of the projection algorithm [1]).

The sample complexity in Theorem 2 is also lower, but we believe that this portion of our analysis applies to the projection algorithm as well [Abbeel, personal communication], so the MWAL algorithm does not represent an improvement in this respect.
5.1 When no expert is available
Our game-playing approach can be very naturally and easily extended to the case where we do not have data from an expert. Instead of finding a policy that maximizes Eq. (3), we find a policy ψ* that maximizes

    max_{ψ∈Ψ} min_{w∈S^k} [w · μ(ψ)].                                         (12)

Here ψ* is the best policy for the worst-case possibility for w*. The MWAL algorithm can be trivially adapted to find this policy just by setting μ̂_E = 0 (compare (12) to (3)).
The following corollary follows straightforwardly from the proof of Theorem 2.
Corollary 1. Given an MDP\R M. Suppose we execute the "no expert" version of the MWAL algorithm for T iterations. Let ψ̄ be the mixed policy returned by the algorithm. Let ε_F, ε_P, ε_R be defined as in Theorem 2. Let v* = max_{ψ∈Ψ} min_{w∈S^k} [w · μ(ψ)]. Then

    V(ψ̄) ≥ v* − ε                                                            (13)

if

    T ≥ 9 ln k / (2(ε′(1−γ))²)                                                (14)

where

    ε′ ≜ ε − (2ε_F + ε_P + 2ε_R/(1−γ)) / 3.                                   (15)

5.2 Representation error
Although the MWAL algorithm makes different assumptions about the domains of w and φ than the projection algorithm, these differences are of no real consequence. The same class of reward functions can be expressed under either set of assumptions by roughly doubling the number of features. Concretely, consider a feature function φ that satisfies the assumptions of the projection algorithm. Then for each s, if φ(s) = (f₁, . . . , f_k), define φ′(s) = (f₁, . . . , f_k, −f₁, . . . , −f_k, 0). Observe that φ′ satisfies the assumptions of the MWAL algorithm, and that min_{w∈B^k} max_s |R*(s) − w · φ(s)| ≥ min_{w∈S^{2k+1}} max_s |R*(s) − w · φ′(s)|. So by only doubling the number of features, we can ensure that the representation error ε_R does not increase. Notably, employing this reduction forces the game value v* to be zero, ensuring that the MWAL algorithm, like the projection algorithm, will mimic the expert. This observation provides us with some useful guidance for selecting features for the MWAL algorithm: both the original and negated version of a feature should be used if we are uncertain how that feature is correlated with reward.
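As a concrete illustration, the feature-doubling reduction is essentially a one-liner (a sketch in our notation):

```python
import numpy as np

def duplicate_features(phi_s):
    """Map phi(s) = (f_1, ..., f_k) to phi'(s) = (f_1, ..., f_k, -f_1, ..., -f_k, 0),
    so that weight vectors in the L1 ball B^k have counterparts on the simplex
    S^{2k+1} with no larger representation error."""
    phi_s = np.asarray(phi_s, dtype=float)
    return np.concatenate([phi_s, -phi_s, [0.0]])
```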
6 When the transition function is unknown
In the previous sections, as well as in Abbeel and Ng [1], it was assumed that the transition function θ(s, a, ·) was known. In this section we sketch how to remove this assumption. Our approach to applying the MWAL algorithm to this setting can be informally described as follows: Let M = (S, A, θ, γ, φ) be the true MDP\R for which we are missing θ. Consider the MLE estimate θ̂ of θ that is formed from the expert's sample trajectories. Let Z ⊆ S × A be the set of state-action pairs that are visited "most frequently" by the expert. Then after observing enough trajectories, θ̂ will be an accurate estimate of θ on Z. We form a pessimistic estimate M̂_Z of M by using θ̂ to model the transitions in Z, and route all other transitions to a special "dead state." Following Kearns and Singh [5], who used a very similar idea in their analysis of the E³ algorithm, we call M̂_Z the induced MDP\R on Z.
By a straightforward application of several technical lemmas due to Kearns and Singh [5] and Abbeel and Ng [6], it is possible to show that if the number of expert trajectories m is at least O( (|S|⁸|A|/ε³) ln(|S||A|/δ) + (|S||A|/ε³) ln(2|S||A|/δ) ), and we let Z be the set of state-action pairs visited by the expert at least O( (|S|²/ε⁴) ln(|S|³|A|/δ) ) times, then using M̂_Z in place of M in the MWAL algorithm will add only O(ε) to the error bound in Theorem 2. More details are available in the supplement [4], including a precise procedure for constructing M̂_Z.
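A rough sketch of how the induced MDP\R on Z might be assembled from expert counts follows; the count threshold is left to the caller, and the array layout is ours (the supplement's precise construction may differ).

```python
import numpy as np

def induced_mdp(counts, next_counts, threshold):
    """counts[s, a]: expert visit counts; next_counts[s, a, s']: observed
    transition counts. Returns an MLE transition model on the frequently
    visited set Z, with every other state-action pair routed to an absorbing
    'dead state' (index n_s)."""
    n_s, n_a = counts.shape
    dead = n_s                                   # extra absorbing state
    theta = np.zeros((n_s + 1, n_a, n_s + 1))
    for s in range(n_s):
        for a in range(n_a):
            if counts[s, a] >= threshold:        # (s, a) is in Z
                theta[s, a, :n_s] = next_counts[s, a] / counts[s, a]
            else:
                theta[s, a, dead] = 1.0          # pessimistic: go to dead state
    theta[dead, :, dead] = 1.0                   # dead state is absorbing
    return theta
```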
7 Experiments
For ease of comparison, we tested the MWAL algorithm and the projection algorithm in a car driving
simulator that resembled the experimental setup from Abbeel and Ng [1]. Videos of the experiments
discussed below are available in the supplement [4].
In our simulator, the apprentice must navigate a car through randomly-generated traffic on a three-lane highway. We define three features for this environment: a collision feature (0 if in contact with another car, and 1/2 otherwise), an off-road feature (0 if on the grass, and 1/2 otherwise), and a speed feature (1/2, 3/4 and 1 for each of the three possible speeds, with higher values corresponding to higher speeds). Note that the features encode that, other things being equal, speed is good, and collisions and off-roads are bad.
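For concreteness, the three features could be encoded roughly as below; the argument names are hypothetical stand-ins, since the simulator's internal state representation is not described in the text.

```python
def driving_features(in_collision, on_grass, speed_level):
    """Return [collision, off_road, speed] with the ranges described above:
    0 or 1/2 for the first two features, and 1/2, 3/4 or 1 for the three speeds."""
    collision = 0.0 if in_collision else 0.5
    off_road = 0.0 if on_grass else 0.5
    speed = {0: 0.5, 1: 0.75, 2: 1.0}[speed_level]
    return [collision, off_road, speed]
```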
                      Fast Expert   Proj.   MWAL   Bad Expert   Proj.   MWAL     No Expert   MWAL
Speed                 Fast          Fast    Fast   Slow         Slow    Medium   -           Medium
Collisions (per sec)  1.1           1.1     0.5    2.23         2.23    0        -           0
Off-roads (per sec)   0             0       0      8.0          8.0     0        -           0
The table above displays the results of using the MWAL and projection algorithms to learn a driving policy by observing two kinds of experts: a "fast" expert (drives at the fastest speed; indifferent to collisions), and a "bad" expert (drives at the slowest speed; tries to hit cars and go off-road). In both cases, the MWAL algorithm leverages information encoded in the features to produce a policy that is significantly better than the expert's policy.

We also applied the MWAL algorithm to the "no expert" setting (see Section 5.1). In that case, it produced a policy that drives as fast as possible without risking any collisions or off-roads. Given our features, this is indeed the best policy for the worst-case choice of reward function.
Acknowledgments
We thank Pieter Abbeel for his helpful explanatory comments regarding his proofs. We also thank
the anonymous reviewers for their suggestions for additional experiments and other improvements.
This work was supported by the NSF under grant IIS-0325500.
References
[1] P. Abbeel, A. Ng (2004). Apprenticeship Learning via Inverse Reinforcement Learning. ICML 21.
[2] Y. Freund, R. E. Schapire (1999). Adaptive Game Playing Using Multiplicative Weights. Games and Economic Behavior 29, 79–103.
[3] N. Ratliff, J. Bagnell, M. Zinkevich (2006). Maximum Margin Planning. ICML 23.
[4] U. Syed, R. E. Schapire (2007). "A Game-Theoretic Approach to Apprenticeship Learning - Supplement". http://www.cs.princeton.edu/~usyed/nips2007/.
[5] M. Kearns, S. Singh (2002). Near-Optimal Reinforcement Learning in Polynomial Time. Machine Learning 49, 209–232.
[6] P. Abbeel, A. Ng (2005). Exploration and Apprenticeship Learning in Reinforcement Learning. ICML 22. (Long version; available at http://www.cs.stanford.edu/~pabbeel/)
| 3293 |@word version:3 pw:1 polynomial:1 pieter:1 simulation:1 seek:1 invoking:1 reduction:1 initial:4 minmax:1 exclusively:1 selecting:2 tuned:1 ours:1 rightmost:1 yet:1 must:3 remove:2 grass:1 stationary:7 selected:1 fewer:2 ith:2 provides:3 simpler:2 si1:1 direct:1 consists:2 prove:1 apprenticeship:12 notably:1 indeed:2 expected:2 roughly:1 behavior:8 frequently:2 planning:1 simulator:2 discounted:1 actual:1 solver:1 provided:1 xx:1 moreover:2 tweaked:1 maximizes:4 bounded:1 linearity:1 begin:1 what:1 medium:2 kind:1 substantially:1 unified:1 finding:7 nj:2 guarantee:7 every:2 exactly:2 k2:4 hit:1 schwartz:1 grant:1 omit:1 t1:1 treat:1 consequence:2 approximately:1 usyed:2 ease:2 fastest:1 acknowledgment:1 practice:1 implement:1 procedure:1 elicit:1 thought:1 significantly:3 projection:19 word:2 road:5 close:1 operator:1 context:1 applying:1 www:2 equivalent:1 deterministic:6 reviewer:1 missing:1 maximizing:2 zinkevich:1 straightforward:1 regardless:1 go:1 simplicity:1 immediately:2 pure:2 assigns:1 array:1 importantly:2 dominate:1 his:2 play:2 suppose:5 expensive:2 approximated:1 observed:1 role:1 worst:4 environment:8 complexity:5 moderately:1 reward:34 personal:1 singh:3 solving:5 purely:1 swap:1 easily:1 various:2 fast:6 describe:2 sih:1 choosing:1 quite:2 whose:1 larger:2 encoded:2 stanford:1 say:3 otherwise:2 itself:1 advantage:2 sequence:1 poorly:1 achieve:1 description:1 neumann:1 produce:5 executing:1 illustrate:1 pose:1 minor:1 eq:10 strong:1 c:4 involves:1 implies:3 drawback:1 stochastic:1 exploration:1 argued:1 abbeel:22 fix:1 suffices:3 preliminary:1 f1:3 anonymous:1 pessimistic:1 mwal:25 hold:5 sufficiently:1 considered:1 exp:1 mapping:1 driving:3 achieves:2 visited:3 si0:1 sensitive:2 highway:1 correctness:1 always:1 rather:2 corollary:2 encode:1 improvement:3 slowest:1 contrast:1 sense:2 helpful:1 typically:2 explanatory:1 proj:2 mimicking:2 arg:2 issue:1 special:1 initialize:1 equal:3 once:1 saving:1 ng:20 adversarially:1 kw:3 icml:3 nearly:1 mimic:1 viation:1 randomly:2 simultaneously:1 replaced:1 consisting:1 ourselves:1 huge:2 possibility:3 indifferent:1 mixture:1 accurate:1 minw:6 unless:1 desired:2 guidance:1 theoretical:2 uncertain:1 column:1 maximization:1 conducted:1 straightforwardly:1 considerably:1 chooses:3 st:6 sensitivity:1 off:5 von:1 worse:1 dead:1 expert:53 return:2 toy:1 account:1 suggesting:1 sec:2 depends:4 multiplicative:6 view:1 try:1 observing:4 traffic:1 portion:1 parallel:1 formed:1 who:2 largely:2 likewise:2 efficiently:1 correspond:1 produced:1 trajectory:12 drive:3 definition:3 naturally:2 associated:1 proof:3 recall:1 knowledge:2 car:5 actually:2 higher:3 follow:2 specify:1 execute:3 though:3 done:1 just:4 risking:1 sketch:2 ei:2 believe:1 mdp:15 building:1 requiring:1 true:8 hence:2 assigned:1 game:34 noted:1 theoretic:3 performs:1 novel:1 common:1 empirically:1 qp:1 extend:1 discussed:3 theirs:1 outlined:1 trivially:1 fk:3 had:1 impressive:1 add:1 route:1 certain:2 inequality:3 kwk1:2 additional:2 somewhat:2 maximize:1 ii:1 technical:1 match:2 faster:3 long:2 post:3 mle:1 impact:1 ensuring:1 essentially:3 expectation:15 iteration:13 represent:2 cz:4 achieved:1 addition:1 affecting:1 unlike:1 strict:2 comment:2 induced:1 thing:2 call:1 mw:1 leverage:2 noting:1 near:1 enough:3 opposite:1 economic:1 idea:1 regarding:1 whether:1 expression:1 motivated:1 returned:4 cause:1 action:5 useful:2 collision:5 clear:2 covered:1 informally:1 amount:1 discount:1 schapire:9 specifies:1 http:2 nsf:1 notice:1 sign:1 estimated:1 correctly:3 per:2 key:1 
thereafter:1 terminology:1 sum:5 run:2 inverse:1 extends:1 almost:2 place:1 decision:2 bound:5 guaranteed:2 nips2007:1 display:1 quadratic:1 nonnegative:2 badly:1 adapted:1 constraint:1 aspect:3 speed:7 extremely:2 optimality:1 min:11 department:2 numbering:1 according:2 combination:3 slightly:2 appealing:1 happens:1 explained:1 pr:1 sided:2 taken:1 computationally:2 ln:16 equation:1 turn:1 mind:1 end:1 available:8 rewritten:1 apply:2 observe:4 appropriate:1 apprentice:12 original:1 remaining:1 ensure:1 umar:1 exploit:1 k1:1 especially:1 establish:1 comparatively:1 contact:1 objective:1 added:1 quantity:1 strategy:8 usual:1 bagnell:1 thank:2 olden:2 collected:1 cauchy:1 trivial:1 reason:1 extent:1 assuming:2 length:4 relationship:1 demonstration:1 equivalently:1 difficult:2 setup:2 executed:1 robert:1 expense:1 ratliff:2 policy:82 unknown:13 perform:2 negated:1 upper:1 observation:2 markov:2 finite:2 behave:3 situation:2 extended:2 communication:1 precise:1 police:1 bk:3 pair:2 specified:1 componentwise:1 redefining:1 tremendous:1 address:1 able:5 usually:2 below:4 program:2 max:21 including:1 video:2 belief:1 syed:2 s2k:1 force:1 improve:1 misleading:1 inversely:1 faced:1 prior:2 review:1 relative:4 freund:6 mixed:19 suggestion:1 pabbeel:1 agent:4 s0:3 playing:4 supported:1 jth:1 formal:1 weaker:1 allow:1 benefit:1 transition:8 cumulative:2 ignores:1 concretely:1 refinement:1 reinforcement:3 adaptive:1 far:2 employing:1 approximate:1 observable:1 assumed:4 iterative:1 sk:10 table:1 additionally:2 learn:2 improving:1 necessarily:3 constructing:1 domain:3 did:1 main:2 motivation:1 repeated:1 slow:2 explicit:1 rk:4 theorem:16 bad:4 resembled:1 navigate:1 sit:1 exists:1 restricting:1 supplement:5 margin:1 easier:2 suited:1 explore:1 infinitehorizon:1 expressed:1 doubling:2 applies:1 satisfies:2 worstcase:1 goal:11 presentation:1 viewed:1 formulated:2 exposition:1 absence:1 hard:2 change:1 specifically:3 averaging:1 wt:4 kearns:3 conservative:2 called:6 lemma:1 experimental:3 player:14 incorporate:1 princeton:7 tested:1 correlated:1 |
2,529 | 3,294 | Modeling homophily and stochastic equivalence in
symmetric relational data
Peter D. Hoff
Departments of Statistics and Biostatistics
University of Washington
Seattle, WA 98195-4322.
hoff@stat.washington.edu
Abstract
This article discusses a latent variable model for inference and prediction of symmetric relational data. The model, based on the idea of the eigenvalue decomposition, represents the relationship between two nodes as the weighted inner-product of node-specific vectors of latent characteristics. This "eigenmodel" generalizes other popular latent variable models, such as latent class and distance models: It is
other popular latent variable models, such as latent class and distance models: It is
shown mathematically that any latent class or distance model has a representation
as an eigenmodel, but not vice-versa. The practical implications of this are examined in the context of three real datasets, for which the eigenmodel has as good or
better out-of-sample predictive performance than the other two models.
1 Introduction
Let {y_{i,j} : 1 ≤ i < j ≤ n} denote data measured on pairs of a set of n objects or nodes. The examples considered in this article include friendships among people, associations among words and interactions among proteins. Such measurements are often represented by a sociomatrix Y, which is a symmetric n × n matrix with an undefined diagonal. One of the goals of relational data analysis is to describe the variation among the entries of Y, as well as any potential covariation of Y with observed explanatory variables X = {x_{i,j} : 1 ≤ i < j ≤ n}.
To this end, a variety of statistical models have been developed that describe y_{i,j} as some function of node-specific latent variables u_i and u_j and a linear predictor βᵀx_{i,j}. In such formulations, {u_1, . . . , u_n} represent across-node variation in the y_{i,j}'s and β represents covariation of the y_{i,j}'s with the x_{i,j}'s. For example, Nowicki and Snijders [2001] present a model in which each node i is assumed to belong to an unobserved latent class u_i, and a probability distribution describes the relationships between each pair of classes (see Kemp et al. [2004] and Airoldi et al. [2005] for recent extensions of this approach). Such a model captures stochastic equivalence, a type of pattern often seen in network data in which the nodes can be divided into groups such that members of the same group have similar patterns of relationships.

An alternative approach to representing across-node variation is based on the idea of homophily, in which the relationships between nodes with similar characteristics are stronger than the relationships between nodes having different characteristics. Homophily provides an explanation for data patterns often seen in social networks, such as transitivity ("a friend of a friend is a friend"), balance ("the enemy of my friend is an enemy") and the existence of cohesive subgroups of nodes. In order to represent such patterns, Hoff et al. [2002] present a model in which the conditional mean of y_{i,j} is a function of β′x_{i,j} − |u_i − u_j|, where {u_1, . . . , u_n} are vectors of unobserved, latent characteristics in a Euclidean space. In the context of binary relational data, such a model predicts the existence of more transitive triples, or "triangles," than would be seen under a random allocation of edges among pairs of nodes. An important assumption of this model is that two nodes with a strong
Figure 1: Networks exhibiting homophily (left panel) and stochastic equivalence (right panel).
relationship between them are also similar to each other in terms of how they relate to other nodes: A strong relationship between i and j suggests |u_i − u_j| is small, but this further implies that |u_i − u_k| ≈ |u_j − u_k|, and so nodes i and j are assumed to have similar relationships to other nodes.
The latent class model of Nowicki and Snijders [2001] and the latent distance model of Hoff et al.
[2002] are able to identify, respectively, classes of nodes with similar roles, and the locational properties of the nodes. These two items are perhaps the two primary features of interest in social network
and relational data analysis. For example, discussion of these concepts makes up more than half of
the 734 pages of main text in Wasserman and Faust [1994]. However, a model that can represent
one feature may not be able to represent the other: Consider the two graphs in Figure 1. The graph
on the left displays a large degree of transitivity, and can be well-represented by the latent distance
model with a set of vectors {u_1, . . . , u_n} in two-dimensional space, in which the probability of an edge between i and j is decreasing in |u_i − u_j|. In contrast, representation of the graph by a latent
class model would require a large number of classes, none of which would be particularly cohesive
or distinguishable from the others. The second panel of Figure 1 displays a network involving three
classes of stochastically equivalent nodes, two of which (say A and B) have only across-class ties,
and one (C) that has both within- and across-class ties. This graph is well-represented by a latent
class model in which edges occur with high probability between pairs having one member in each
of A and B or in B and C, and among pairs having both members in C (in models of stochastic
equivalence, nodes within each class are not differentiated). In contrast, representation of this type
of graph with a latent distance model would require the dimension of the latent characteristics to be
on the order of the class membership sizes.
Many real networks exhibit combinations of structural equivalence and homophily in varying degrees. In these situations, use of either the latent class or distance model would only be representing
part of the network structure. The goal of this paper is to show that a simple statistical model based
on the eigenvalue decomposition can generalize the latent class and distance models: Just as any
symmetric matrix can be approximated with a subset of its largest eigenvalues and corresponding
eigenvectors, the variation in a sociomatrix can be represented by modeling y_{i,j} as a function of β′x_{i,j} + u_iᵀΛu_j, where {u_1, . . . , u_n} are node-specific factors and Λ is a diagonal matrix. In this
article, we show mathematically and by example how this eigenmodel can represent both stochastic
equivalence and homophily in symmetric relational data, and thus is more general than the other two
latent variable models.
The next section motivates the use of latent variable models for relational data, and shows mathematically that the eigenmodel generalizes the latent class and distance models in the sense that it can
compactly represent the same network features as these other models but not vice-versa. Section 3
compares the out-of-sample predictive performance of these three models on three different datasets:
a social network of 12th graders; a relational dataset on word association counts from the first chapter of Genesis; and a dataset on protein-protein interactions. The first two networks exhibit latent
homophily and stochastic equivalence respectively, whereas the third shows both to some degree.
In support of the theoretical results of Section 2, the latent distance and class models perform well
for the first and second datasets respectively, whereas the eigenmodel performs well for all three.
Section 4 summarizes the results and discusses some extensions.
2 Latent variable modeling of relational data

2.1 Justification of latent variable modeling
The use of probabilistic latent variable models for the representation of relational data can be motivated in a natural way: For undirected data without covariate information, symmetry suggests that any probability model we consider should treat the nodes as being exchangeable, so that

    Pr({y_{i,j} : 1 ≤ i < j ≤ n} ∈ A) = Pr({y_{πi,πj} : 1 ≤ i < j ≤ n} ∈ A)

for any permutation π of the integers {1, . . . , n} and any set of sociomatrices A. Results of Hoover [1982] and Aldous [1985, chap. 14] show that if a model satisfies the above exchangeability condition for each integer n, then it can be written as a latent variable model of the form

    y_{i,j} = h(μ, u_i, u_j, ε_{i,j})                                         (1)

for i.i.d. latent variables {u_1, . . . , u_n}, i.i.d. pair-specific effects {ε_{i,j} : 1 ≤ i < j ≤ n} and some function h that is symmetric in its second and third arguments. This result is very general - it says that any statistical model for a sociomatrix in which the nodes are exchangeable can be written as a latent variable model.
Different choices of h lead to different models for y. A general probit model for binary network data can be put in the form of (1) as follows:

    {ε_{i,j} : 1 ≤ i < j ≤ n} ~ i.i.d. normal(0, 1)
    {u_1, . . . , u_n} ~ i.i.d. f(u|ψ)
    y_{i,j} = h(μ, u_i, u_j, ε_{i,j}) = δ_{(0,∞)}(μ + α(u_i, u_j) + ε_{i,j}),

where μ and ψ are parameters to be estimated, and α is a symmetric function, also potentially involving parameters to be estimated. Covariation between Y and an array of predictor variables X can be represented by adding a linear predictor βᵀx_{i,j} to μ. Finally, integrating over ε_{i,j} we obtain Pr(y_{i,j} = 1|x_{i,j}, u_i, u_j) ≡ θ_{i,j} = Φ[μ + βᵀx_{i,j} + α(u_i, u_j)]. Since the ε_{i,j}'s can be assumed to be independent, the conditional probability of Y given X and {u_1, . . . , u_n} can be expressed as

    Pr(Y |X, u_1, . . . , u_n) = ∏_{i<j} θ_{i,j}^{y_{i,j}} (1 − θ_{i,j})^{1−y_{i,j}}          (2)
Many relational datasets have ordinal, non-binary measurements (for example, the word association data in Section 3.2). Rather than "thresholding" the data to force it to be binary, we can make use of the full information in the data with an ordered probit version of (2):

    Pr(y_{i,j} = y|x_{i,j}, u_i, u_j) ≡ θ_{i,j}^{(y)} = Φ[μ_y + βᵀx_{i,j} + α(u_i, u_j)] − Φ[μ_{y+1} + βᵀx_{i,j} + α(u_i, u_j)]

    Pr(Y |X, u_1, . . . , u_n) = ∏_{i<j} θ_{i,j}^{(y_{i,j})},

where {μ_y} are parameters to be estimated for all but the lowest value y in the sample space.
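As an illustration, the per-pair ordered probit probability can be computed as below. This sketch uses the standard increasing-cutpoint convention (equivalent to the display above up to the sign convention on the thresholds), and the names are ours, not the paper's.

```python
import numpy as np
from scipy.stats import norm

def ordered_probit_prob(y, eta, cutpoints):
    """Pr(y_{ij} = y) when a latent z_{ij} ~ N(eta, 1) falls between consecutive
    cutpoints, with eta = beta'x_{ij} + alpha(u_i, u_j) and y in {0, ..., m}."""
    c = np.concatenate([[-np.inf], np.sort(cutpoints), [np.inf]])
    return norm.cdf(c[y + 1] - eta) - norm.cdf(c[y] - eta)
```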
2.2 Effects of nodal variation

The latent variable models described in the Introduction correspond to different choices for the symmetric function α:

Latent class model:
    α(u_i, u_j) = m_{u_i,u_j},   u_i ∈ {1, . . . , K}, i ∈ {1, . . . , n},   M a K × K symmetric matrix

Latent distance model:
    α(u_i, u_j) = −|u_i − u_j|,   u_i ∈ R^K, i ∈ {1, . . . , n}

Latent eigenmodel:
    α(u_i, u_j) = u_iᵀΛu_j,   u_i ∈ R^K, i ∈ {1, . . . , n},   Λ a K × K diagonal matrix.
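The three choices of α are easy to compare side by side in code (a minimal sketch; in practice M, Λ and the latent u's are estimated from the data):

```python
import numpy as np

def alpha_class(u_i, u_j, M):          # u_i, u_j are class labels in {0, ..., K-1}
    return M[u_i, u_j]                 # M is a K x K symmetric matrix

def alpha_distance(u_i, u_j):          # u_i, u_j are vectors in R^K
    return -np.linalg.norm(u_i - u_j)

def alpha_eigen(u_i, u_j, lam):        # lam holds the diagonal of Lambda
    return np.sum(lam * u_i * u_j)     # weighted inner product u_i' Lambda u_j
```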
Interpretations of the latent class and distance models were given in the Introduction. An interpretation of the latent eigenmodel is that each node i has a vector of unobserved characteristics u_i = {u_{i,1}, . . . , u_{i,K}}, and that similar values of u_{i,k} and u_{j,k} will contribute positively or negatively to the relationship between i and j, depending on whether λ_k > 0 or λ_k < 0. In this way, the model can represent both positive or negative homophily in varying degrees, and stochastically equivalent nodes (nodes with the same or similar latent vectors) may or may not have strong relationships with one another.
We now show that the eigenmodel generalizes the latent class and distance models: Let S_n be the set of n × n sociomatrices, and let

    C_K = {C ∈ S_n : c_{i,j} = m_{u_i,u_j}, u_i ∈ {1, . . . , K}, M a K × K symmetric matrix};
    D_K = {D ∈ S_n : d_{i,j} = −|u_i − u_j|, u_i ∈ R^K};
    E_K = {E ∈ S_n : e_{i,j} = u_iᵀΛu_j, u_i ∈ R^K, Λ a K × K diagonal matrix}.
In other words, C_K is the set of possible values of {α(u_i, u_j) : 1 ≤ i < j ≤ n} under a K-dimensional latent class model, and similarly for D_K and E_K.

E_K generalizes C_K: Let C ∈ C_K and let C̃ be a completion of C obtained by setting c_{i,i} = m_{u_i,u_i}. There are at most K unique rows of C̃ and so C̃ is of rank K at most. Since the set E_K contains all sociomatrices that can be completed as a rank-K matrix, we have C_K ⊆ E_K. Since E_K includes matrices with n unique rows, C_K ⊂ E_K unless K ≥ n, in which case the two sets are equal.
E_{K+1} weakly generalizes D_K: Let D ∈ D_K. Such a (negative) distance matrix will generally be of full rank, in which case it cannot be represented exactly by an E ∈ E_K for K < n. However, what is critical from a modeling perspective is whether or not the order of the entries of each D can be matched by the order of the entries of an E. This is because the probit and ordered probit model we are considering include threshold variables {μ_y : y ∈ Y} which can be adjusted to accommodate monotone transformations of α(u_i, u_j). With this in mind, note that the matrix of squared distances among a set of K-dimensional vectors {z_1, . . . , z_n} is a monotonic transformation of the distances, is of rank K + 2 or less (as D² = [z_1′z_1, . . . , z_n′z_n]ᵀ1ᵀ + 1[z_1′z_1, . . . , z_n′z_n] − 2ZZᵀ) and so is in E_{K+2}. Furthermore, letting u_i = (z_i, √(r² − z_iᵀz_i)) ∈ R^{K+1} for each i ∈ {1, . . . , n}, we have u_i′u_j = z_i′z_j + √((r² − |z_i|²)(r² − |z_j|²)). For large r this is approximately r² − |z_i − z_j|²/2, which is an increasing function of the negative distance d_{i,j}. For large enough r the numerical order of the entries of this E ∈ E_{K+1} is the same as that of D ∈ D_K.
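The embedding used in this argument is easy to verify numerically (a sketch; r only needs to be large relative to the |z_i|):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 2))                      # latent positions z_i in R^2
r = 100.0
U = np.column_stack([Z, np.sqrt(r**2 - (Z**2).sum(axis=1))])  # u_i in R^3

inner = U @ U.T                                  # u_i' u_j
sq_dist = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
approx = r**2 - sq_dist / 2.0                    # r^2 - |z_i - z_j|^2 / 2
print(np.abs(inner - approx).max())              # tiny once r is large
```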
D_K does not weakly generalize E_1: Consider E ∈ E_1 generated by Λ = 1, u_1 = 1 and u_i = r < 1 for i > 1. Then r = e_{1,i₁} = e_{1,i₂} > e_{i₁,i₂} = r² for all i₁, i₂ ≠ 1. For which K is such an ordering of the elements of D ∈ D_K possible? If K = 1 then such an ordering is possible only if n = 3. For K = 2 such an ordering is possible for n ≤ 6. This is because the kissing number in R², or the number of non-overlapping spheres of unit radius that can simultaneously touch a central sphere of unit radius, is 6. If we put node 1 at the center of the central sphere, and 6 nodes at the centers of the 6 kissing spheres, then we have d_{1,i₁} = d_{1,i₂} = d_{i₁,i₂} for all i₁, i₂ ≠ 1. We can only have d_{1,i₁} = d_{1,i₂} > d_{i₁,i₂} if we remove one of the non-central spheres to allow for more room between those remaining, leaving one central sphere plus five kissing spheres for a total of n = 6. Increasing n increases the necessary dimension of the Euclidean space, and so for any K there are n and E ∈ E_1 that have entry orderings that cannot be matched by those of any D ∈ D_K.
A less general positive semi-definite version of the eigenmodel has been studied by Hoff [2005], in which Λ was taken to be the identity matrix. Such a model can weakly generalize a distance model, but cannot generalize a latent class model, as the eigenvalues of a latent class model could be negative.
3 Model comparison on three different datasets

3.1 Parameter estimation
Bayesian parameter estimation for the three models under consideration can be achieved via Markov chain Monte Carlo (MCMC) algorithms, in which posterior distributions for the unknown quantities are approximated with empirical distributions of samples from a Markov chain. For these algorithms, it is useful to formulate the probit models described in Section 2.1 in terms of an additional latent variable z_{i,j} ~ normal[βᵀx_{i,j} + α(u_i, u_j)], for which y_{i,j} = y if μ_y < z_{i,j} < μ_{y+1}. Using conjugate prior distributions where possible, the MCMC algorithms proceed by generating a new state φ^{(s+1)} = {Z^{(s+1)}, μ^{(s+1)}, β^{(s+1)}, u_1^{(s+1)}, . . . , u_n^{(s+1)}} from a current state φ^{(s)} as follows:
1. For each {i, j}, sample z_{i,j} from its (constrained normal) full conditional distribution.
2. For each y ∈ Y, sample μ_y from its (normal) full conditional distribution.
3. Sample β from its (multivariate normal) full conditional distribution.
4. Sample u_1, . . . , u_n and their associated parameters:
   - For the latent distance model, propose and accept or reject new values of the u_i's with the Metropolis algorithm, and then sample the population variances of the u_i's from their (inverse-gamma) full conditional distributions.
   - For the latent class model, update each class variable u_i from its (multinomial) conditional distribution given current values of Z, {u_j : j ≠ i} and the variance of the elements of M (but marginally over M to improve mixing). Then sample the elements of M from their (normal) full conditional distributions and the variance of the entries of M from its (inverse-gamma) full conditional distribution.
   - For the latent vector model, sample each u_i from its (multivariate normal) full conditional distribution, sample the mean of the u_i's from their (normal) full conditional distributions, and then sample Λ from its (multivariate normal) full conditional distribution.
To facilitate comparison across models, we used prior distributions in which the level of prior variability in α(u_i, u_j) was similar across the three different models (further details and code to implement these algorithms are available at my website).
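For example, step 1 of the sampler is a draw from a truncated normal; a sketch in our notation (scipy parameterizes the truncation points on the standardized scale):

```python
from scipy.stats import truncnorm

def sample_z(eta, lo, hi, rng=None):
    """Draw z_{ij} ~ N(eta, 1) truncated to (lo, hi), where eta is the current
    linear predictor and (lo, hi) are the thresholds bracketing y_{ij}."""
    a, b = lo - eta, hi - eta           # standardized truncation points
    return truncnorm.rvs(a, b, loc=eta, scale=1.0, random_state=rng)
```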
3.2 Cross validation
To compare the performance of these three different models we evaluated their out-of-sample predictive performance under a range of dimensions (K ∈ {3, 5, 10}) and on three different datasets exhibiting varying combinations of homophily and stochastic equivalence. For each combination of dataset, dimension and model we performed a five-fold cross validation experiment as follows:
1. Randomly divide the n(n − 1)/2 data values into 5 sets of roughly equal size, letting s_{i,j} be the set to which pair {i, j} is assigned.
2. For each s ∈ {1, . . . , 5}:
   (a) Obtain posterior distributions of the model parameters conditional on {y_{i,j} : s_{i,j} ≠ s}, the data on pairs not in set s.
   (b) For pairs {k, l} in set s, let ŷ_{k,l} = E[y_{k,l} | {y_{i,j} : s_{i,j} ≠ s}], the posterior predictive mean of y_{k,l} obtained using data not in set s.

This procedure generates a sociomatrix Ŷ, in which each entry ŷ_{i,j} represents a predicted value obtained from using a subset of the data that does not include y_{i,j}. Thus Ŷ is a sociomatrix of out-of-sample predictions of the observed data Y.
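The fold bookkeeping can be sketched as follows, with the MCMC fit abstracted behind a caller-supplied function (all names hypothetical):

```python
import numpy as np

def pairwise_cv(Y, fit_and_predict, n_folds=5, seed=0):
    """Five-fold cross validation over the dyads of a symmetric sociomatrix Y.
    fit_and_predict(Y_train, mask) should fit the model using the unmasked
    pairs and return a matrix of posterior predictive means for every pair."""
    n = Y.shape[0]
    rng = np.random.default_rng(seed)
    upper = np.triu(np.ones((n, n), dtype=bool), k=1)
    fold = rng.integers(0, n_folds, size=(n, n))      # fold label per dyad
    Y_hat = np.full((n, n), np.nan)
    for s in range(n_folds):
        mask = upper & (fold == s)
        mask = mask | mask.T                          # hold out set s symmetrically
        Y_train = np.where(mask, np.nan, Y.astype(float))
        preds = fit_and_predict(Y_train, mask)
        Y_hat[mask] = preds[mask]
    return Y_hat
```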
Table 1: Cross validation results and area under the ROC curves.

          Add Health            Genesis               Protein interaction
 K     dist   class  eigen   dist   class  eigen   dist   class  eigen
 3     0.82   0.64   0.75    0.62   0.82   0.82    0.83   0.79   0.88
 5     0.81   0.70   0.78    0.66   0.82   0.82    0.84   0.84   0.90
10     0.76   0.69   0.80    0.74   0.82   0.82    0.85   0.86   0.90

[Figure 2 plot area: the friendship network and ROC curves (false positives vs. true positives) for the distance, class and vector models at K = 3.]
Figure 2: Social network data and unscaled ROC curves for the K = 3 models.
3.3 Adolescent Health social network
The first dataset records friendship ties among 247 12th-graders, obtained from the National Longitudinal Study of Adolescent Health (www.cpc.unc.edu/projects/addhealth). For these data,
y_{i,j} = 1 or 0 depending on whether or not there is a close friendship tie between student i and j
(as reported by either i or j). These data are represented as an undirected graph in the first panel of
Figure 2. Like many social networks, these data exhibit a good deal of transitivity. It is therefore not
surprising that the best performing models considered (in terms of area under the ROC curve, given
in Table 1) are the distance models, with the eigenmodels close behind. In contrast, the latent class
models perform poorly, and the results suggest that increasing K for this model would not improve
its performance.
3.4 Word neighbors in Genesis
The second dataset we consider is derived from word and punctuation counts in the first chapter
of the King James version of Genesis (www.gutenberg.org/dirs/etext05/bib0110.txt).
There are 158 unique words and punctuation marks in this chapter, and for our example we take
y_{i,j} to be the number of times that word i and word j appear next to each other (a model extension,
appropriate for an asymmetric version of this dataset, is discussed in the next section). These data
can be viewed as a graph with weighted edges, the unweighted version of which is shown in the
first panel of Figure 3. The lack of a clear spatial representation of these data is not unexpected,
as text data such as these do not have groups of words with strong within-group connections, nor
do they display much homophily: a given noun may appear quite frequently next to two different
verbs, but these verbs will not appear next to each other. A better description of these data might be
that there are classes of words, and connections occur between words of different classes. The cross
validation results support this claim, in that the latent class model performs much better than the
distance model on these data, as seen in the second panel of Figure 3 and in Table 1. As discussed in
the previous section, the eigenmodel generalizes the latent class model and performs equally well.
[Figure 3 plot area: the Genesis word-adjacency network, with words and punctuation as node labels, and ROC curves (false positives vs. true positives) for the distance, class and vector models.]
Figure 3: Relational text data from Genesis and unscaled ROC curves for the K = 3 models.
We note that parameter estimates for these data were obtained using the ordered probit versions of
the models (as the data are not binary), but the out-of-sample predictive performance was evaluated
based on each model's ability to predict a non-zero relationship.
3.5 Protein-protein interaction data
Our last example is the protein-protein interaction data of Butland et al. [2005], in which y_{i,j} = 1 if proteins i and j bind and y_{i,j} = 0 otherwise. We analyze the large connected component of this graph, which includes 230 proteins and is displayed in the first panel of Figure 4. This graph indicates patterns of both stochastic equivalence and homophily: Some nodes could be described as "hubs", connecting to many other nodes which in turn do not connect to each other. Such structure is better represented by a latent class model than a distance model. However, most nodes connecting to hubs generally connect to only one hub, which is a feature that is hard to represent with a small number of latent classes. To represent this structure well, we would need two latent classes per hub, one for the hub itself and one for the nodes connecting to the hub. Furthermore, the core of the network (the nodes with more than two connections) displays a good degree of homophily in the form of transitive triads, a feature which is easiest to represent with a distance model. The eigenmodel is able to capture both of these data features and performs better than the other two models in terms of out-of-sample predictive performance. In fact, the K = 3 eigenmodel performs better than the other two models for any value of K considered.
4 Discussion
Latent distance and latent class models provide concise, easily interpreted descriptions of social
networks and relational data. However, neither of these models will provide a complete picture of
relational data that exhibit degrees of both homophily and stochastic equivalence. In contrast, we
have shown that a latent eigenmodel is able to represent datasets with either or both of these data
patterns. This is due to the fact that the eigenmodel provides an unrestricted low-rank approximation
to the sociomatrix, and is therefore able to represent a wide array of patterns in the data.
The concept behind the eigenmodel is the familiar eigenvalue decomposition of a symmetric matrix. The analogue for directed networks or rectangular matrix data would be a model based on the
singular value decomposition, in which data y_{i,j} could be modeled as depending on u_iᵀDv_j, where u_i and v_j represent vectors of latent row and column effects respectively. Statistical inference using
the singular value decomposition for Gaussian data is straightforward. A model-based version of
[Figure 4 plot area: the protein interaction network and ROC curves (false positives vs. true positives) for the distance, class and vector models.]
Figure 4: Protein-protein interaction data and unscaled ROC curves for the K = 3 models.
the approach for binary and other non-Gaussian relational datasets could be implemented using the
ordered probit model discussed in this paper.
Acknowledgment
This work was partially funded by NSF grant number 0631531.
References
Edoardo Airoldi, David Blei, Eric Xing, and Stephen Fienberg. A latent mixed membership model for relational data. In LinkKDD '05: Proceedings of the 3rd international workshop on Link discovery, pages 82–89, New York, NY, USA, 2005. ACM Press. ISBN 1-59593-215-1. doi: http://doi.acm.org/10.1145/1134271.1134283.
David J. Aldous. Exchangeability and related topics. In École d'été de probabilités de Saint-Flour, XIII–1983, volume 1117 of Lecture Notes in Math., pages 1–198. Springer, Berlin, 1985.
G. Butland, J. M. Peregrin-Alvarez, J. Li, W. Yang, X. Yang, V. Canadien, A. Starostine, D. Richards, B. Beattie, N. Krogan, M. Davey, J. Parkinson, J. Greenblatt, and A. Emili. Interaction network containing conserved and essential protein complexes in Escherichia coli. Nature, 433:531–537, 2005.
Peter D. Hoff. Bilinear mixed-effects models for dyadic data. J. Amer. Statist. Assoc., 100(469):286–295, 2005. ISSN 0162-1459.
Peter D. Hoff, Adrian E. Raftery, and Mark S. Handcock. Latent space approaches to social network analysis. J. Amer. Statist. Assoc., 97(460):1090–1098, 2002. ISSN 0162-1459.
D. N. Hoover. Row-column exchangeability and a generalized model for probability. In Exchangeability in probability and statistics (Rome, 1981), pages 281–291. North-Holland, Amsterdam, 1982.
Charles Kemp, Thomas L. Griffiths, and Joshua B. Tenenbaum. Discovering latent classes in relational data. AI Memo 2004-019, Massachusetts Institute of Technology, 2004.
Krzysztof Nowicki and Tom A. B. Snijders. Estimation and prediction for stochastic blockstructures. J. Amer. Statist. Assoc., 96(455):1077–1087, 2001. ISSN 0162-1459.
Stanley Wasserman and Katherine Faust. Social Network Analysis: Methods and Applications. Cambridge University Press, Cambridge, 1994.
| 3294 |@word version:7 stronger:1 open:1 adrian:1 d2:1 decomposition:5 concise:1 accommodate:1 contains:1 ecole:1 longitudinal:1 current:2 abundantly:1 surprising:1 si:3 written:2 numerical:1 remove:1 update:1 grass:1 half:1 discovering:1 website:1 item:1 beginning:1 core:1 record:1 blei:1 provides:2 math:1 node:33 contribute:1 org:2 five:2 nodal:1 mui:3 roughly:1 dist:3 nor:1 frequently:1 decreasing:1 chap:1 linkkdd:1 considering:1 increasing:3 project:1 matched:2 panel:7 biostatistics:1 lowest:1 what:1 easiest:1 kind:1 interpreted:1 developed:1 unobserved:3 transformation:2 every:1 tie:4 exactly:1 assoc:3 uk:2 exchangeable:2 unit:2 grant:1 appear:4 positive:8 bind:1 treat:1 bilinear:1 approximately:1 might:1 plus:1 studied:1 examined:1 equivalence:10 suggests:2 escherichia:1 range:1 directed:1 practical:1 unique:3 acknowledgment:1 definite:1 implement:1 procedure:1 probabilit:1 area:2 empirical:1 reject:1 word:12 integrating:1 griffith:1 protein:13 suggest:1 cannot:3 unc:1 close:2 put:2 context:2 darkness:1 www:2 equivalent:2 fruitful:1 center:2 straightforward:1 adolescent:2 rectangular:1 formulate:1 wasserman:2 rule:1 array:2 fill:1 his:1 population:1 variation:5 justification:1 element:3 approximated:2 particularly:1 asymmetric:1 richards:1 predicts:1 observed:2 role:1 fly:1 capture:2 connected:1 triad:1 ordering:4 yk:2 ui:46 weakly:3 predictive:6 negatively:1 eric:1 triangle:1 compactly:1 easily:1 represented:8 chapter:3 describe:2 monte:1 doi:2 quite:1 whose:1 enemy:2 faust:2 say:2 otherwise:1 ability:1 statistic:2 god:1 emili:1 itself:1 eigenvalue:5 isbn:1 propose:1 interaction:7 product:1 mixing:1 poorly:1 description:2 moved:1 seattle:1 sea:1 generating:1 object:1 depending:3 friend:4 completion:1 stat:1 measured:1 zit:1 strong:4 implemented:1 predicted:1 implies:1 exhibiting:2 radius:2 stochastic:10 cpc:1 require:2 hoover:2 mathematically:3 adjusted:1 extension:3 considered:3 normal:9 seed:1 predict:1 claim:1 earth:1 estimation:3 saw:1 largest:1 vice:2 zn0:2 weighted:2 gaussian:2 rather:1 ck:6 season:1 parkinson:1 exchangeability:4 varying:3 derived:1 rank:5 indicates:1 contrast:4 sense:1 inference:2 membership:2 explanatory:1 accept:1 i1:5 among:8 sociomatrix:6 constrained:1 spatial:1 noun:1 hoff:7 equal:2 having:3 washington:2 zz:1 whale:1 represents:3 fowl:1 others:1 di1:2 xiii:1 randomly:1 simultaneously:1 gamma:2 national:1 familiar:1 addhealth:1 interest:1 multiply:1 flour:1 male:1 punctuation:2 yielding:1 undefined:1 behind:2 light:2 chain:2 implication:1 heaven:2 edge:4 necessary:1 unless:1 tree:1 euclidean:2 divide:2 theoretical:1 column:2 modeling:5 herb:1 zn:3 entry:7 subset:2 predictor:3 gutenberg:1 reported:1 connect:2 my:2 dvj:1 international:1 probabilistic:1 kdimensional:1 together:1 connecting:3 squared:1 central:4 containing:1 stochastically:2 ek:11 coli:1 li:1 potential:1 de:2 star:1 student:1 includes:2 north:1 performed:1 analyze:1 xing:1 air:1 variance:3 characteristic:6 correspond:1 identify:1 gathered:1 dry:1 generalize:4 bayesian:1 none:1 carlo:1 marginally:1 sixth:1 james:1 associated:1 di:2 dataset:6 popular:1 covariation:3 massachusetts:1 stanley:1 day:2 tom:1 wherein:1 alvarez:1 formulation:1 grader:2 evaluated:2 amer:3 furthermore:2 just:1 night:1 ei:1 touch:1 overlapping:1 lack:1 perhaps:1 facilitate:1 effect:4 usa:1 concept:2 true:3 assigned:1 symmetric:11 nowicki:3 i2:8 deal:1 cohesive:2 transitivity:3 generalized:1 complete:1 performs:5 image:1 likeness:1 consideration:1 charles:1 kissing:3 multinomial:1 ji:2 homophily:13 volume:1 association:3 
belong:1 interpretation:2 discussed:3 measurement:2 versa:2 cambridge:2 ai:1 rd:1 similarly:1 handcock:1 funded:1 moving:1 add:1 posterior:3 multivariate:3 recent:1 own:1 perspective:1 aldous:2 female:1 binary:6 life:1 yi:22 joshua:1 conserved:1 seen:4 additional:1 greater:1 unrestricted:1 semi:1 stephen:1 full:11 snijders:3 cross:4 sphere:7 divided:2 e1:5 equally:1 prediction:3 involving:2 txt:1 represent:13 achieved:1 whereas:2 singular:2 leaving:1 undirected:2 member:3 spirit:1 integer:2 structural:1 yang:2 enough:1 variety:1 zi:7 inner:1 idea:2 lesser:1 whether:3 motivated:1 edoardo:1 peter:3 proceed:1 york:1 deep:1 generally:2 useful:1 clear:1 eigenvectors:1 tenenbaum:1 statist:3 http:1 zj:2 nsf:1 sign:1 estimated:3 per:1 shall:1 group:4 threshold:1 neither:1 krzysztof:1 graph:9 monotone:1 year:1 inverse:2 you:1 fourth:1 place:1 saying:1 blockstructures:1 uti:4 summarizes:1 cattle:1 display:4 fold:1 occur:2 locational:1 generates:1 u1:12 argument:1 performing:1 department:1 combination:3 conjugate:1 across:6 describes:1 beast:1 metropolis:1 pr:7 gathering:1 taken:1 fienberg:1 discus:2 count:2 turn:1 ordinal:1 mind:1 letting:2 end:1 peregrin:1 generalizes:6 available:1 eigenmodel:16 z10:2 differentiated:1 appropriate:1 z1p:1 alternative:1 eigen:3 existence:2 thomas:1 remaining:1 include:3 completed:1 saint:1 uj:33 dominion:1 quantity:1 primary:1 diagonal:4 said:1 exhibit:4 distance:26 link:1 berlin:1 ei1:1 topic:1 kemp:2 water:1 code:1 issn:3 modeled:1 relationship:11 balance:1 katherine:1 potentially:1 relate:1 negative:4 memo:1 motivates:1 unknown:1 perform:2 datasets:8 markov:2 displayed:1 situation:1 relational:17 variability:1 genesis:5 rome:1 verb:2 david:2 pair:9 z1:2 connection:3 subgroup:1 able:5 pattern:7 explanation:1 analogue:1 critical:1 natural:1 force:1 representing:2 improve:2 technology:1 picture:1 greenblatt:1 created:1 raftery:1 transitive:2 health:3 sn:4 text:3 prior:3 discovery:1 probit:7 lecture:1 permutation:1 mixed:2 allocation:1 triple:1 validation:4 degree:6 article:3 thresholding:1 unscaled:3 land:1 row:4 last:1 allow:1 institute:1 neighbor:1 wide:1 face:1 fifth:1 davey:1 curve:5 dimension:4 unweighted:1 unto:1 made:1 social:9 meat:1 assumed:3 krogan:1 xi:14 un:11 latent:56 table:3 nature:1 symmetry:1 bearing:1 complex:1 vj:1 main:1 midst:1 n2:1 dyadic:1 positively:1 roc:5 creature:1 ny:1 third:3 rk:5 friendship:3 specific:4 covariate:1 hub:6 r2:5 dk:8 workshop:1 essential:1 false:3 adding:1 ci:2 airoldi:2 distinguishable:1 expressed:1 ordered:4 unexpected:1 amsterdam:1 partially:1 holland:1 monotonic:1 springer:1 satisfies:1 acm:2 conditional:13 goal:2 identity:1 king:1 viewed:1 room:1 man:1 hard:1 beattie:1 total:1 called:1 e:1 support:2 people:1 mark:2 mcmc:2 d1:4 |
2,530 | 3,295 | Discriminative Batch Mode Active Learning
Yuhong Guo and Dale Schuurmans
Department of Computing Science
University of Alberta
{yuhong, dale}@cs.ualberta.ca
Abstract
Active learning sequentially selects unlabeled instances to label with the goal of
reducing the effort needed to learn a good classifier. Most previous studies in active learning have focused on selecting one unlabeled instance to label at a time
while retraining in each iteration. Recently a few batch mode active learning
approaches have been proposed that select a set of most informative unlabeled
instances in each iteration under the guidance of heuristic scores. In this paper,
we propose a discriminative batch mode active learning approach that formulates
the instance selection task as a continuous optimization problem over auxiliary
instance selection variables. The optimization is formulated to maximize the discriminative classification performance of the target classifier, while also taking
the unlabeled data into account. Although the objective is not convex, we can
manipulate a quasi-Newton method to obtain a good local solution. Our empirical
studies on UCI datasets show that the proposed active learning is more effective
than current state-of-the-art batch mode active learning algorithms.
1 Introduction
Learning a good classifier requires a sufficient number of labeled training instances. In many circumstances, unlabeled instances are easy to obtain, while labeling is expensive or time consuming.
For example, it is easy to download a large number of webpages, however, it typically requires manual effort to produce classification labels for these pages. Randomly selecting unlabeled instances
for labeling is inefficient in many situations, since non-informative or redundant instances might be
selected. Hence, active learning (i.e., selective sampling) methods have been adopted to control the
labeling process in many areas of machine learning, with the goal of reducing the overall labeling
effort.
Given a large pool of unlabeled instances, active learning provides a way to iteratively select the
most informative unlabeled instances (the queries) to label. This is the typical setting of pool-based active learning. Most active learning approaches, however, have focused on selecting only one
unlabeled instance at one time, while retraining the classifier on each iteration. When the training
process is hard or time consuming, this repeated retraining is inefficient. Furthermore, if a parallel
labeling system is available, a single instance selection system can make wasteful use of the resource. Thus, a batch mode active learning strategy that selects multiple instances each time is more
appropriate under these circumstances. Note that simply using a single instance selection strategy to
select more than one unlabeled instance in each iteration does not work well, since it fails to take the
information overlap between the multiple instances into account. Principles for batch mode active
learning need to be developed to address the multi-instance selection specifically. In fact, a few
batch mode active learning approaches have been proposed recently [2, 8, 9, 17, 19]. However, most
extend existing single instance selection strategies into multi-instance selection simply by using a
heuristic score or greedy procedure to ensure both the instance diversity and informativeness.
In this paper, we propose a new discriminative batch mode active learning strategy that exploits
information from an unlabeled set to attempt to learn a good classifier directly. We define a good
classifier to be one that obtains high likelihood on the labeled training instances and low uncertainty
on labels of the unlabeled instances. We therefore formulate the instance selection problem as an
optimization problem with respect to auxiliary instance selection variables, taking a combination
of discriminative classification performance and label uncertainty as the objective function. Unfortunately, this optimization problem is NP-hard; thus seeking the optimal solution is intractable.
However, we can approximate it locally using a second order Taylor expansion and obtain a suboptimal solution using a quasi-Newton local optimization technique.
The instance selection variables we introduce can be interpreted as indicating self-supervised, optimistic guesses for the labels of the selected unlabeled instances. A concern about the instance
selection process, therefore, is that some information in the unlabeled data that is inconsistent with
the true classification partition might mislead instance selection. Fortunately, the active learning
method can immediately tell whether it has been misled, by comparing the true labels with its optimized guesses. Therefore, one can then adjust the active selection strategy to avoid such over-fitting
in the next iteration, whenever a mismatch between the labeled and unlabeled data has been detected.
An empirical study on UCI datasets shows that the proposed batch mode active learning method is
more effective than some current state-of-the-art batch mode active learning algorithms.
2 Related Work
Many researchers have addressed the active learning problem in a variety of ways. Most have
focused on selecting a single most informative unlabeled instance to label at a time. Many such
approaches therefore make myopic decisions based solely on the current learned classifier, and select
the unlabeled instance for which there is the greatest uncertainty. [10] chooses the unlabeled instance
with conditional probability closest to 0.5 as the most uncertain instance. [5] takes the instance on
which a committee of classifiers disagree the most. [3, 18] suggest choosing the instance closest
to the classification boundary, where [18] analyzes this active learning strategy as a version space
reduction process. Approaches that exploit unlabeled data to provide complementary information
for active learning have also been proposed. [4, 20] exploit unlabeled data by using the prior density
p(x) as uncertainty weights. [16] selects the instance that optimizes the expected generalization error
over the unlabeled data. [11] uses an EM approach to integrate information from unlabeled data. [13,
22] consider combining active learning with semi-supervised learning. [14] presents a mathematical
model that explicitly combines clustering and active learning. [7] presents a discriminative approach
that implicitly exploits the clustering information contained in the unlabeled data by considering
optimistic labelings.
Since single instance selection strategies require tedious retraining with each instance labeled (and,
moreover, since they cannot take advantage of parallel labeling systems), many batch mode active
learning methods have recently been proposed. [2, 17, 19] extend single instance selection strategies
that use support vector machines. [2] takes the diversity of the selected instances into account, in
addition to individual informativeness. [19] proposes a representative sampling approach that selects
the cluster centers of the instances lying within the margin of a support vector machine. [8, 9]
choose multiple instances that efficiently reduce the Fisher information. Overall, these approaches
use a variety of heuristics to guide the instance selection process, where the selected batch should
be informative about the classification model while being diverse enough so that their information
overlap is minimized.
Instead of using heuristic measures, in this paper, we formulate batch mode active learning as an
optimization problem that aims to learn a good classifier directly. Our optimization selects the best
set of unlabeled instances and their labels to produce a classifier that attains maximum likelihood
on labels of the labeled instances while attaining minimum uncertainty on labels of the unlabeled
instances. It is intractable to conduct an exhaustive search for the optimal solution; our optimization
problem is NP-hard. Nevertheless we can exploit a second-order Taylor approximation and use
a quasi-Newton optimization method to quickly reach a local solution. Our proposed approach
provides an example of exploiting optimization techniques in batch model active learning research,
much like other areas of machine learning where optimization techniques have been widely applied
[1].
3 Logistic Regression
In this paper, we use binary logistic regression as the base classification algorithm. Logistic regression is a well-known and mature statistical model for probabilistic classification that has been
actively studied and applied in machine learning. Given a test instance x, binary logistic regression
models the conditional probability of the class label $y \in \{+1, -1\}$ by
$$p(y\,|\,\mathbf{x}, \mathbf{w}) = \frac{1}{1 + \exp(-y\,\mathbf{w}^\top \mathbf{x})}$$
where w is the model parameter. Here the bias term is omitted for simplicity of notation. The model
parameters can be trained by maximizing the likelihood of the labeled training data, i.e., minimizing
the logloss of the training instances
$$\min_{\mathbf{w}}\ \sum_{i \in L} \log\left(1 + \exp(-y_i\,\mathbf{w}^\top \mathbf{x}_i)\right) + \frac{\lambda}{2}\,\mathbf{w}^\top \mathbf{w} \qquad (1)$$
where $L$ indexes the training instances, and $\frac{\lambda}{2}\mathbf{w}^\top\mathbf{w}$ is a regularization term introduced to avoid
over-fitting problems. Logistic regression is a robust classifier that can be trained efficiently using
various convex optimization techniques [12]. Although it is a linear classifier, it is easy to obtain
nonlinear classifications by simply introducing kernels [21].
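For concreteness, the training problem (1) can be solved with a few lines of gradient descent. The following is a minimal NumPy sketch; the function names, step size and iteration count are our own choices, not from the paper:

```python
import numpy as np

def train_logreg(X, y, lam=1.0, lr=0.1, iters=500):
    """Minimize sum_i log(1 + exp(-y_i w^T x_i)) + (lam/2) w^T w by gradient descent.

    X: (n, d) array of instances; y: (n,) array of labels in {+1, -1}.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = y * (X @ w)                   # y_i w^T x_i
        sigma = 1.0 / (1.0 + np.exp(margins))   # 1 - P(y_i | x_i, w)
        grad = -(X.T @ (y * sigma)) + lam * w   # gradient of the regularized logloss
        w -= lr * grad
    return w

def predict_proba(X, w):
    """P(y = +1 | x, w) for each row of X."""
    return 1.0 / (1.0 + np.exp(-(X @ w)))
```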
4 Discriminative Batch Mode Active Learning
For active learning, one typically encounters a small number of labeled instances and a large number
of unlabeled instances. Instance selection strategies based only on the labeled data therefore ignore
potentially useful information embodied in the unlabeled instances. In this section, we present
a new discriminative batch mode active learning algorithm for binary classification that exploits
information in the unlabeled instances. The proposed approach is discriminative in the sense that
(1) it selects a batch of instances by optimizing a discriminative classification model; and (2) it
selects instances by considering the best discriminative configuration of their labels leading to the
best classifier. Unlike other batch mode active learning methods, which identify the most informative
batch of instances using heuristic measures, our approach aims to identify the batch of instances that
directly optimizes classification performance.
4.1 Optimization Problem
An optimal active learning strategy selects a set of instances to label that leads to learning the best
classifier. We assume the learner selects a set of a fixed size m, which is chosen as a parameter. Supervised learning methods typically maximize the likelihood of training instances. With unlabeled
data being available, semi-supervised learning methods have been proposed that train by simultaneously maximizing the likelihood of labeled instances and minimizing the uncertainty of the labels
for unlabeled instances [6]. That is, to achieve a classifier with better generalization performance,
one can maximizing the expected log likelihood of the labeled data and minimize the entropy of the
missing labels on the unlabeled data, according to
$$\sum_{i \in L} \log P(y_i\,|\,\mathbf{x}_i, \mathbf{w}) + \alpha \sum_{j \in U} \sum_{y = \pm 1} P(y\,|\,\mathbf{x}_j, \mathbf{w}) \log P(y\,|\,\mathbf{x}_j, \mathbf{w}) \qquad (2)$$
where $\alpha$ is a tradeoff parameter used to adjust the relative influence of the labeled and unlabeled data,
w specifies the conditional model, L indexes the labeled instances, and U indexes the unlabeled
instances.
The new active learning approach we propose is motivated by this semi-supervised learning principle. We propose to select a batch of m unlabeled instances, S, to label in each iteration from the
total unlabeled set U , with the goal of maximizing the objective (2). Specifically, we define the
score function for a set of selected instances S in iteration t + 1 as follows
$$f(S) = \sum_{i \in L^t \cup S} \log P(y_i\,|\,\mathbf{x}_i, \mathbf{w}^{t+1}) - \alpha \sum_{j \in U^t \setminus S} H(y\,|\,\mathbf{x}_j, \mathbf{w}^{t+1}) \qquad (3)$$
where $\mathbf{w}^{t+1}$ is the parameter set for the conditional classification model trained on the new labeled set $L^{t+1} = L^t \cup S$, and $H(y\,|\,\mathbf{x}_j, \mathbf{w}^{t+1})$ denotes the entropy of the conditional distribution $P(y\,|\,\mathbf{x}_j, \mathbf{w}^{t+1})$, such that
$$H(y\,|\,\mathbf{x}_j, \mathbf{w}^{t+1}) = -\sum_{y = \pm 1} P(y\,|\,\mathbf{x}_j, \mathbf{w}^{t+1}) \log P(y\,|\,\mathbf{x}_j, \mathbf{w}^{t+1})$$
The proposed active learning strategy is to select the batch of instances that has the highest score.
In practice, however it is problematic to use the f (S) score directly to guide instance selection: the
labels for instances S are not known when the selection is conducted. One typical solution for this
problem is to use the expected f (S) score computed under the current conditional model specified
by $\mathbf{w}^t$:
$$E[f(S)] = \sum_{\mathbf{y}_S} P(\mathbf{y}_S\,|\,\mathbf{x}_S, \mathbf{w}^t)\, f(S)$$
However, using $P(\mathbf{y}_S\,|\,\mathbf{x}_S, \mathbf{w}^t)$ as weights, this expectation might aggravate any ambiguity that already exists in the current classification model $\mathbf{w}^t$, since it has been trained on a very small labeled
set Lt . Instead, we propose an optimistic strategy: use the best f (S) score that the batch of unlabeled instances S can achieve over all possible label configurations. This optimistic scoring function
can be written as
$$f(S) = \max_{\mathbf{y}_S}\ \sum_{i \in L^t \cup S} \log P(y_i\,|\,\mathbf{x}_i, \mathbf{w}^{t+1}) - \alpha \sum_{j \in U^t \setminus S} H(y\,|\,\mathbf{x}_j, \mathbf{w}^{t+1}) \qquad (4)$$
Thus the problem becomes how to select a set of instances S that achieves the best optimistic f (S)
score defined in (4). Although this problem can be solved using an exhaustive search on all size
m subsets, S, of the unlabeled set U , it is intractable to do so in practice since the search space is
exponentially large. Explicit heuristic search approaches seeking a local optimum do not exist either,
since it is hard to define an efficient set of operators that can transfer from one position to another
one within the search space while guaranteeing improvements to the optimistic score.
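To make the optimistic score concrete, the following sketch evaluates eq. (4) for a single candidate batch by brute-force enumeration of its label configurations, reusing the hypothetical `train_logreg`/`predict_proba` helpers sketched in Section 3. Enumeration is exponential in |S| and is shown only for intuition; the paper avoids it via the relaxation developed below:

```python
import numpy as np
from itertools import product

def entropy(p):
    """Binary entropy (nats) of P(y = +1) = p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def optimistic_score(X, y_L, L, U, S, alpha=1.0, lam=1.0):
    """Evaluate eq. (4) for one candidate batch S by enumerating its labelings.

    L, U, S are index lists into X; y_L holds the known labels of L.
    """
    best = -np.inf
    rest = [j for j in U if j not in S]
    for y_S in product([1, -1], repeat=len(S)):
        idx = list(L) + list(S)
        y_aug = np.concatenate([y_L, np.array(y_S)])
        w = train_logreg(X[idx], y_aug, lam=lam)         # w^{t+1} trained on L u S
        p = np.clip(predict_proba(X[idx], w), 1e-12, 1 - 1e-12)
        loglik = np.log(np.where(y_aug == 1, p, 1 - p)).sum()
        H = entropy(predict_proba(X[rest], w)).sum()     # uncertainty on U \ S
        best = max(best, loglik - alpha * H)
    return best
```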
Instead, in this paper we propose to approach the problem by formulating optimistic batch mode
active learning as an explicit mathematical optimization. Given the labeled set L t and unlabeled set
U t after iteration t, the task in iteration t + 1 is to select a size m subset S from U t that achieves the
best score defined in (4). To do so, we first introduce a set of $\{0,1\}$-valued instance selection variables $\mu$. In particular, $\mu$ is a $|U^t| \times 2$ sized indicator matrix, where each row vector $\mu_j$ corresponds to the two possible labels $\{+1, -1\}$ of the $j$th instance in $U^t$. Then the optimistic instance selection
for iteration t + 1 can be formulated as the following optimization problem
$$\max_{\mu}\ \sum_{i \in L^t} \log P(y_i\,|\,\mathbf{x}_i, \mathbf{w}^{t+1}) + \beta \sum_{j \in U^t} \mathbf{v}_j^{t+1} \mu_j^\top - \alpha \sum_{j \in U^t} (1 - \mu_j \mathbf{e})\, H(y\,|\,\mathbf{x}_j, \mathbf{w}^{t+1}) \qquad (5)$$
$$\text{s.t.} \quad \mu \in \{0, 1\}^{|U^t| \times 2} \qquad (6)$$
$$\mu \bullet E = m \qquad (7)$$
$$\mu_j \mathbf{e} \le 1, \ \forall j \qquad (8)$$
$$\mathbf{1}^\top \mu \le \left(\tfrac{1}{2} + \epsilon\right) m\, \mathbf{e}^\top \qquad (9)$$
where $\mathbf{v}_j^{t+1}$ is a row vector $[\log P(y = 1\,|\,\mathbf{x}_j, \mathbf{w}^{t+1}),\ \log P(y = -1\,|\,\mathbf{x}_j, \mathbf{w}^{t+1})]$; $\mathbf{e}$ is a 2-entry column vector with all 1s; $\mathbf{1}$ is a $|U^t|$-entry column vector with all 1s; $E$ is a $|U^t| \times 2$ sized matrix with all 1s; $\bullet$ is matrix inner product; $\epsilon$ is a user-provided parameter that controls class balance during instance selection; and $\beta$ is a parameter that we will use later to adjust our belief in the guessed labels. Note that the selection variables $\mu$ not only choose instances from $U^t$, but also select labels for the selected instances. Solving this optimization yields the optimal $\mu$ for instance selection in iteration $t+1$.
The optimization problem (5) is an integer programming problem that produces equivalent results
to using exhaustive search to optimize (4), except that we have additional class balance constraints
(9). Integer programming is an NP-hard problem. Thus, the first step toward solving this problem
in practice is to relax it into a continuous optimization by replacing the integer constraints (6) with
continuous constraints $0 \le \mu \le 1$, yielding the relaxed formulation
$$\max_{\mu}\ \sum_{i \in L^t} \log P(y_i\,|\,\mathbf{x}_i, \mathbf{w}^{t+1}) + \beta \sum_{j \in U^t} \mathbf{v}_j^{t+1} \mu_j^\top - \alpha \sum_{j \in U^t} (1 - \mu_j \mathbf{e})\, H(y\,|\,\mathbf{x}_j, \mathbf{w}^{t+1}) \qquad (10)$$
$$\text{s.t.} \quad 0 \le \mu \le 1 \qquad (11)$$
$$\mu \bullet E = m \qquad (12)$$
$$\mu_j \mathbf{e} \le 1, \ \forall j \qquad (13)$$
$$\mathbf{1}^\top \mu \le \left(\tfrac{1}{2} + \epsilon\right) m\, \mathbf{e}^\top \qquad (14)$$
If we can solve this continuous optimization problem, a greedy strategy can then be used to recover the integer solution by iteratively setting the largest non-integer $\mu$ value to 1 with respect to the constraints. However, this relaxed optimization problem is still very complex: the objective function (10) is not a concave function of $\mu$.¹ Nevertheless, standard continuous optimization techniques can be used to solve for a local maximum.
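A minimal sketch of the relax-and-round idea, using plain projected gradient ascent with the classifier held fixed. The paper instead retrains $\mathbf{w}^{t+1}$ and uses the quasi-Newton direction of Section 4.2; the fixed-classifier simplification, the projection heuristic and all names below are ours:

```python
import numpy as np

def select_batch_relaxed(P_U, m, alpha=1.0, beta=1.0, lr=0.05, iters=200):
    """Relax-and-round sketch for (10)-(13), with the classifier held fixed.

    P_U: (n, 2) current model posteriors [P(+1 | x_j), P(-1 | x_j)] for j in U.
    Returns m (index, label) pairs.
    (The class-balance constraint (14) is omitted in this sketch.)
    """
    n = P_U.shape[0]
    V = np.log(np.clip(P_U, 1e-12, 1.0))        # v_j = [log P(+1), log P(-1)]
    H = -(P_U * V).sum(axis=1, keepdims=True)   # entropies H(y | x_j)
    mu = np.full((n, 2), m / (2.0 * n))         # feasible interior start
    grad = beta * V + alpha * H                 # constant once w is held fixed
    for _ in range(iters):
        mu = np.clip(mu + lr * grad, 0.0, 1.0)
        mu *= m / mu.sum()                      # approximate projection: mu . E = m
    order = np.argsort(-mu, axis=None)          # greedy rounding, largest mu first
    picked, used = [], set()
    for flat in order:
        j, c = divmod(int(flat), 2)
        if j not in used:                       # enforce mu_j e <= 1
            used.add(j)
            picked.append((j, +1 if c == 0 else -1))
        if len(picked) == m:
            break
    return picked
```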
4.2 Quasi-Newton Method
To derive a local optimization technique, consider the objective function (10) as a function of the instance selection variables $\mu$:
$$f(\mu) = \sum_{i \in L^t} \log P(y_i\,|\,\mathbf{x}_i, \mathbf{w}^{t+1}) + \beta \sum_{j \in U^t} \mathbf{v}_j^{t+1} \mu_j^\top - \alpha \sum_{j \in U^t} (1 - \mu_j \mathbf{e})\, H(y\,|\,\mathbf{x}_j, \mathbf{w}^{t+1}) \qquad (15)$$
As noted, this function is non-concave, therefore convenient convex optimization techniques that
achieve global optimal solutions cannot be applied. Nevertheless, a local optimization approach
exploiting quasi-Newton methods can quickly determine a local optimal solution $\mu^*$. Such a local optimization approach iteratively updates $\mu$ to improve the objective (15), and stops when a local maximum is reached. At each iteration, it makes a local move that allows it to achieve the largest
improvement in the objective function along the direction decided by cumulative information obtained from the sequence of local gradients. Suppose $\hat{\mu}^{(k)}$ is the starting point for iteration $k$. We first derive a second-order Taylor approximation $\hat{f}(\mu)$ for the objective function $f(\mu)$ at $\hat{\mu}^{(k)}$:
$$\hat{f}(\mu) = f(\hat{\mu}^{(k)}) + \nabla f_k^\top \mathrm{vec}(\mu - \hat{\mu}^{(k)}) + \frac{1}{2}\,\mathrm{vec}(\mu - \hat{\mu}^{(k)})^\top H_k\, \mathrm{vec}(\mu - \hat{\mu}^{(k)}) \qquad (16)$$
where $\mathrm{vec}(\cdot)$ is a function that transforms a matrix into a column vector, and $\nabla f_k = \nabla f(\hat{\mu}^{(k)})$ and $H_k$ denote the gradient vector and Hessian matrix of $f(\mu)$ at point $\hat{\mu}^{(k)}$, respectively. Since our original optimization function $f(\mu)$ is smooth, the quadratic function $\hat{f}(\mu)$ can reasonably approximate it in a small neighborhood of $\hat{\mu}^{(k)}$. Thus we can determine our update direction by solving a quadratic program with the objective (16) and linear constraints (11), (12), (13) and (14).
Suppose the optimal solution for this quadratic program is $\hat{\mu}^{*(k)}$. Then a reasonable update direction $d_k = \hat{\mu}^{*(k)} - \hat{\mu}^{(k)}$ can be obtained for iteration $k$. Given this direction, a backtrack line search can be used to guarantee improvement over the original objective (15). Note that for each different value of $\mu$, $\mathbf{w}^{t+1}$ has to be retrained on $L^t \cup S$ to evaluate the new objective value, since $S$ is determined by $\mu$. In order to reduce the computational cost, we approximate the training of $\mathbf{w}^{t+1}$ in our empirical study, by limiting it to a few Newton steps with a starting point given by $\mathbf{w}^t$ trained only on $L^t$.
The remaining issue is to compute the local gradient $\nabla f(\hat{\mu}^{(k)})$ and the Hessian matrix $H_k$. We assume $\mathbf{w}^{t+1}$ remains constant under small local updates of $\mu$. Thus the local gradient can be approximated as
$$\nabla f(\hat{\mu}_j^{(k)}) = \beta\,\mathbf{v}_j^{t+1} + \alpha\,[H(y\,|\,\mathbf{x}_j, \mathbf{w}^{t+1}),\ H(y\,|\,\mathbf{x}_j, \mathbf{w}^{t+1})]$$
and therefore $\nabla f(\hat{\mu}^{(k)})$ can be constructed from the individual $\nabla f(\hat{\mu}_j^{(k)})$. We then use BFGS (Broyden-Fletcher-Goldfarb-Shanno) to compute the Hessian matrix, which starts as an identity matrix for the first iteration, and is updated in each iteration as follows [15]:
$$H_{k+1} = H_k - \frac{H_k s_k s_k^\top H_k}{s_k^\top H_k s_k} + \frac{y_k y_k^\top}{y_k^\top s_k}$$
where $y_k = \nabla f_{k+1} - \nabla f_k$, and $s_k = \hat{\mu}^{(k+1)} - \hat{\mu}^{(k)}$. This Hessian matrix accumulates information from the sequence of local gradients to help determine better update directions.

¹ Note that $\mathbf{w}^{t+1}$ is the classification model parameter set trained on $L^{t+1} = L^t \cup S$, where $S$ indexes the unlabeled instances selected by $\mu$. Therefore $\mathbf{w}^{t+1}$ is a function of $\mu$.
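The BFGS recursion itself is compact; a sketch, assuming vectorized gradients and adding a standard curvature-condition guard that the paper does not discuss:

```python
import numpy as np

def bfgs_update(H, s, y_vec, eps=1e-12):
    """One BFGS update of the Hessian approximation H.

    s     = vec(mu^{(k+1)} - mu^{(k)})
    y_vec = grad f_{k+1} - grad f_k
    """
    curv = float(y_vec @ s)
    if curv < eps:                  # skip the update if curvature is not positive
        return H
    Hs = H @ s
    return H - np.outer(Hs, Hs) / float(s @ Hs) + np.outer(y_vec, y_vec) / curv
```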
4.3 Adjustment Strategy
In the discriminative optimization problem formulated in Section 4.1, the $\mu$ variables are used to optimistically select both instances and their labels, with the goal of achieving the best classification model according to the objective (5). However, when the labeled set is small and the discriminative partition (clustering) information contained in the large unlabeled set is inconsistent with the true classification, the labels optimistically guessed for the selected instances through $\mu$ might not match the underlying true labels. When this occurs, the instances selected will not be very useful for identifying the true classification model. Furthermore, the unlabeled data might continue to mislead the next instance selection iteration.
Fortunately, we can immediately identify when the process has been misled once the true labels for
the selected instances have been obtained. If the true labels are different from the labels guessed by
the optimization, we need to make an adjustment for the next instance selection iteration. We have
tried a few adjustment strategies in our study, but report the most effective one in this paper. Note that the being-misled problem is caused by the unlabeled data, which affects the target classification model through the term $\beta \sum_{j \in U^t} \mathbf{v}_j^{t+1} \mu_j^\top$. Therefore, a simple way to fix the problem is to adjust the parameter $\beta$. Specifically, at the end of each iteration $t$, we obtain the true labels $\mathbf{y}_S$ for the selected instances $S$, and compare them with our guessed labels $\hat{\mathbf{y}}_S$ indicated by $\mu^*$. If they are consistent, we will set $\beta = 1$, which means we trust the partition information from the unlabeled data as much as the label information in the labeled data for building the classification model. If $\mathbf{y}_S \ne \hat{\mathbf{y}}_S$, apparently we should reduce the $\beta$ value, that is, reduce the influence of the unlabeled data for the next selection iteration $t+1$. We use a simple heuristic procedure to determine the $\beta$ value in this case. Starting from $\beta = 1$, we then multiplicatively reduce its value by a small factor, 0.5, until a better objective value for (15) is obtained when replacing the guessed indicator variables $\mu^*$ with the true label indicators. Note that, if we reduce $\beta$ to zero, our optimization problem becomes exactly equivalent to picking the most uncertain instance (when $m = 1$).
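The adjustment loop can be sketched as follows, where `objective` stands for eq. (15) evaluated at given label indicators (a hypothetical helper, as is everything named here):

```python
def adjust_beta(objective, mu_guess, mu_true, beta=1.0, factor=0.5, min_beta=1e-6):
    """Halve beta until the true labels score no worse than the optimistic guesses."""
    while beta > min_beta and objective(mu_true, beta) < objective(mu_guess, beta):
        beta *= factor
    return beta
```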
5 Experiments
To investigate the empirical performance of the proposed discriminative batch mode active learning
algorithm (Discriminative), we conducted a set of experiments on nine two-class UCI datasets, comparing with a baseline random instance selection algorithm (Random), a non-batch myopic active
learning method that selects the most uncertain instance each time (MostUncertain), and two batch
mode active learning methods proposed in the literature: svmD, an approach that incorporates diversity in active learning with SVMs [2]; and Fisher, an approach that uses the Fisher information matrix for
instance selection [9]. The UCI datasets we used include (we show the name, followed by the number of instances and the number of attributes): Australian(690;14), Cleve(303;13), Corral(128;6),
Crx(690;15), Flare(1066;10), Glass2(163;9), Heart(270;13), Hepatitis(155;20) and Vote(435;15).
We consider a hard case of active learning, where only a few labeled instances are given at the
start. In each experiment, we start with four randomly selected labeled instances, two in each class.
We then randomly select 2/3 of the remaining instances as the unlabeled set, using the remaining
instances for testing. All the algorithms start with the same initial labeled set, unlabeled set and
testing set. For a fixed batch size $m$, each algorithm repeatedly selects $m$ instances to label each time. In this section, we report the experimental results with $m = 5$, averaged over 20 repetitions.
Figure 1 shows the comparison results on the nine UCI datasets. These results suggest that although
the baseline random sampling method, Random, works surprisingly well in our experiments, the
proposed algorithm, Discriminative, always performs better or at least achieves a comparable performance. Moreover, Discriminative also apparently outperforms the other two batch mode algorithms, svmD and Fisher, on five datasets (Australian, Cleve, Flare, Heart and Hepatitis), and reaches a tie on two datasets (Crx and Vote). The myopic most uncertain selection method, MostUncertain, shows an overall inferior performance to Discriminative on Australian, Cleve, Crx, Heart and Hepatitis, and achieves a tie on Flare and Vote. However, Discriminative demonstrates weak performance on two datasets, Corral and Glass2, where the evaluation lines for most algorithms in the figures are strangely very bumpy. The reason behind this remains to be investigated.
[Figure: nine panels of learning curves, one per dataset (australian, cleve, corral, crx, flare, glass2, heart, hepatitis, vote), plotting Accuracy against Number of Labeled Instances for Random, MostUncertain, svmD, Fisher and Discriminative.]
Figure 1: Results on UCI Datasets
These empirical results suggest that selecting unlabeled instances by directly optimizing the classification model obtains more relevant and informative instances than using heuristic scores to guide the selection. Although the original optimization problem formulated is NP-hard, a relaxed local optimization method that leads to a local optimal solution still works effectively.
6 Conclusion
In this paper, we proposed a discriminative batch mode active learning approach that exploits information in unlabeled data and selects a batch of instances by optimizing the target classification
model. Although the proposed technique could be overly optimistic about the information presented
by the unlabeled set, and consequently be misled, this problem can be identified immediately after
obtaining the true labels. A simple adjustment strategy can then be used to rectify the problem in the
following iteration. Experimental results on UCI datasets show that this approach is generally more effective than other batch mode active learning methods, a random sampling method, and a myopic non-batch mode active learning method. Our current work is focused on two-class classification problems; however, it is easy to extend to multiclass classification problems.
References
[1] K. Bennett and E. Parrado-Hernandez. The interplay of optimization and machine learning
research. Journal of Machine Learning Research, 7, 2006.
[2] K. Brinker. Incorporating diversity in active learning with support vector machines. In Proceedings of the 20th International Conference on Machine learning, 2003.
[3] C. Campbell, N. Cristianini, and A. Smola. Query learning with large margin classifiers. In
Proceedings of the 17th International Conference on Machine Learning, 2000.
[4] D. Cohn, Z. Ghahramani, and M. Jordan. Active learning with statistical models. Journal of
Artificial Intelligence Research, 4, 1996.
[5] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by
committee algorithm. Machine Learning, 28, 1997.
[6] Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. In Advances
in Neural Information Processing Systems, 2005.
[7] Y. Guo and R. Greiner. Optimistic active learning using mutual information. In Proceedings
of the International Joint Conference on Artificial Intelligence, 2007.
[8] S. Hoi, R. Jin, and M. Lyu. Large-scale text categorization by batch mode active learning. In
Proceedings of the International World Wide Web Conference, 2006.
[9] S. Hoi, R. Jin, J. Zhu, and M. Lyu. Batch mode active learning and its application to medical image classification. In Proceedings of the 23rd International Conference on Machine
Learning, 2006.
[10] D. Lewis and W. Gale. A sequential algorithm for training text classifiers. In Proceedings of the
International ACM-SIGIR Conference on Research and Development in Information Retrieval,
1994.
[11] A. McCallum and K. Nigam. Employing EM in pool-based active learning for text classification. In Proceedings of the 15th International Conference on Machine Learning, 1998.
[12] T. Minka. A comparison of numerical optimizers for logistic regression. Technical report,
2003. http://research.microsoft.com/~minka/papers/logreg/.
[13] I. Muslea, S. Minton, and C. Knoblock. Active + semi-supervised learning = robust multi-view
learning. In Proceedings of the 19th International Conference on Machine Learning, 2002.
[14] H. Nguyen and A. Smeulders. Active learning using pre-clustering. In Proceedings of the 21st
International Conference on Machine Learning, 2004.
[15] J. Nocedal and S.J. Wright. Numerical Optimization. Springer, New York, 1999.
[16] N. Roy and A. McCallum. Toward optimal active learning through sampling estimation of
error reduction. In Proceedings of the 18th International Conference on Machine Learning,
2001.
[17] G. Schohn and D. Cohn. Less is more: Active learning with support vector machines. In
Proceedings of the 17th International Conference on Machine Learning, 2000.
[18] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. In Proceedings of the 17th International Conference on Machine Learning, 2000.
[19] Z. Xu, K. Yu, V. Tresp, X. Xu, and J. Wang. Representative sampling for text classification using support vector machines. In Proceedings of the 25th European Conference on Information
Retrieval Research, 2003.
[20] C. Zhang and T. Chen. An active learning framework for content-based information retrieval.
IEEE Trans on Multimedia, 4:260-268, 2002.
[21] J. Zhu and T. Hastie. Kernel logistic regression and the import vector machine. Journal of
Computational and Graphical Statistics, 14, 2005.
[22] X. Zhu, J. Lafferty, and Z. Ghahramani. Combining active learning and semi-supervised learning using gaussian fields and harmonic functions. In ICML Workshop on The Continuum from
Labeled to Unlabeled Data in Machine Learning and Data Mining, 2003.
2,531 | 3,296 | Variational inference for Markov jump processes
Guido Sanguinetti
Department of Computer Science
University of Sheffield, U.K.
guido@dcs.shef.ac.uk
Manfred Opper
Department of Computer Science
Technische Universität Berlin
D-10587 Berlin, Germany
opperm@cs.tu-berlin.de
Abstract
Markov jump processes play an important role in a large number of application
domains. However, realistic systems are analytically intractable and they have traditionally been analysed using simulation based techniques, which do not provide
a framework for statistical inference. We propose a mean field approximation to
perform posterior inference and parameter estimation. The approximation allows
a practical solution to the inference problem, while still retaining a good degree of
accuracy. We illustrate our approach on two biologically motivated systems.
Introduction
Markov jump processes (MJPs) underpin our understanding of many important systems in science
and technology. They provide a rigorous probabilistic framework to model the joint dynamics of
groups (species) of interacting individuals, with applications ranging from information packets in
a telecommunications network to epidemiology and population levels in the environment. These
processes are usually non-linear and highly coupled, giving rise to non-trivial steady states (often
referred to as emerging properties). Unfortunately, this also means that exact statistical inference is
unfeasible and approximations must be made in the analysis of these systems.
A traditional approach, which has been very successful throughout the past century, is to ignore
the discrete nature of the processes and to approximate the stochastic process with a deterministic
process whose behaviour is described by a system of non-linear, coupled ODEs. This approximation
relies on the stochastic fluctuations being negligible compared to the average population counts.
There are many important situations where this assumption is untenable: for example, stochastic
fluctuations are reputed to be responsible for a number of important biological phenomena, from
cell differentiation to pathogen virulence [1]. Researchers are now able to obtain accurate estimates
of the number of macromolecules of a certain species within a cell [2, 3], prompting a need for
practical statistical tools to handle discrete data.
Sampling approaches have been extensively used to simulate the behaviour of MJPs. Gillespie's
algorithm and its generalisations [4, 5] form the basis of many simulators used in systems biology
studies. The simulations can be viewed as individual samples taken from a completely specified
MJP, and can be very useful to reveal possible steady states. However, it is not clear how observed
data can be incorporated in a principled way, which renders this approach of limited use for posterior
inference and parameter estimation. A Markov chain Monte Carlo (MCMC) approach to incorporate observations has been recently proposed by Boys et al. [6]. While this approach holds a lot of
promise, it is computationally very intensive. Despite several simplifying approximations, the correlations between samples mean that several millions of MCMC iterations are needed even in simple
examples. In this paper we present an alternative, deterministic approach to posterior inference and
parameter estimation in MJPs. We extend the mean-field (MF) variational approach ([cf. e.g. 7])
to approximate a probability distribution over an (infinite dimensional) space of discrete paths, representing the time-evolving state of the system. In this way, we replace the couplings between the
different species by their average, mean-field (MF) effect. The result is an iterative algorithm that
allows parameter estimation and prediction with reasonable accuracy and very contained computational costs.
The rest of this paper is organised as follows: in sections 1 and 2 we review the theory of Markov
jump processes and introduce our general strategy to obtain a MF approximation. In section 3
we introduce the Lotka-Volterra model which we use as an example to describe how our approach
works. In section 4 we present experimental results on simulated data from the Lotka-Volterra model
and from a simple gene regulatory network. Finally, we discuss the relationship of our study to other
stochastic models, as well as further extensions and developments of our approach.
1 Markov jump processes
We start off by establishing some notation and basic definitions. A D-dimensional discrete stochastic process is a family of D-dimensional discrete random variables x(t) indexed by the continuous
time $t$. In our examples, the values taken by $\mathbf{x}(t)$ will be restricted to the non-negative integers $\mathbb{N}_0^D$. The dimensionality $D$ represents the number of (molecular) species present in the system; the
components of the vector x (t) then represent the number of individuals of each species present at
time t. Furthermore, the stochastic processes we will consider will always be Markovian, i.e. given
any sequence of observations for the state of the system $(\mathbf{x}_{t_1}, \ldots, \mathbf{x}_{t_N})$, the conditional probability of the state of the system at a subsequent time $\mathbf{x}_{t_{N+1}}$ depends only on the last of the previous observations. A discrete stochastic process which exhibits the Markov property is called a Markov jump
process (MJP).
A MJP is characterised by its process rates $f(\mathbf{x}'|\mathbf{x})$, defined $\forall \mathbf{x}' \ne \mathbf{x}$; in an infinitesimal time interval $\Delta t$, the quantity $f(\mathbf{x}'|\mathbf{x})\,\Delta t$ represents the infinitesimal probability that the system will make a transition from state $\mathbf{x}$ at time $t$ to state $\mathbf{x}'$ at time $t + \Delta t$. Explicitly,
$$p(\mathbf{x}'|\mathbf{x}) \simeq \delta_{\mathbf{x}'\mathbf{x}} + \Delta t\, f(\mathbf{x}'|\mathbf{x}) \qquad (1)$$
where $\delta_{\mathbf{x}'\mathbf{x}}$ is the Kronecker delta and the equation becomes exact in the limit $\Delta t \to 0$. Equation (1) implies by normalisation that $f(\mathbf{x}|\mathbf{x}) = -\sum_{\mathbf{x}' \ne \mathbf{x}} f(\mathbf{x}'|\mathbf{x})$. The interpretation of the process
rates as infinitesimal transition probabilities highlights the simple relationship between the marginal
distribution pt (x) and the process rates. The probability of finding the system in state x at time
$t + \Delta t$ will be given by the probability that the system was already in state $\mathbf{x}$ at time $t$, minus the probability that the system was in state $\mathbf{x}$ at time $t$ and jumped to state $\mathbf{x}'$, plus the probability that the system was in a different state $\mathbf{x}''$ at time $t$ and then jumped to state $\mathbf{x}$. In formulae, this is given by
$$p_{t+\Delta t}(\mathbf{x}) = p_t(\mathbf{x})\left[1 - \sum_{\mathbf{x}' \ne \mathbf{x}} f(\mathbf{x}'|\mathbf{x})\,\Delta t\right] + \sum_{\mathbf{x}' \ne \mathbf{x}} p_t(\mathbf{x}')\, f(\mathbf{x}|\mathbf{x}')\,\Delta t.$$
Taking the limit for $\Delta t \to 0$ we obtain the (forward) Master equation for the marginal probabilities
$$\frac{d p_t(\mathbf{x})}{dt} = \sum_{\mathbf{x}' \ne \mathbf{x}} \left[ -p_t(\mathbf{x})\, f(\mathbf{x}'|\mathbf{x}) + p_t(\mathbf{x}')\, f(\mathbf{x}|\mathbf{x}') \right]. \qquad (2)$$
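On a truncated state space, (2) is a linear ODE and can be integrated directly. A minimal forward-Euler sketch; the truncation, step size and renormalisation guard are our own choices, not the paper's:

```python
import numpy as np

def integrate_master_equation(F, p0, T, dt=1e-3):
    """Forward-Euler integration of the master equation (2) on S truncated states.

    F : (S, S) rate matrix with F[x2, x1] = f(x2 | x1) for x2 != x1
        (the diagonal is ignored and rebuilt from normalisation).
    p0: (S,) initial marginal distribution over the states.
    """
    Q = F.astype(float).copy()
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=0))   # f(x|x) = -sum_{x' != x} f(x'|x)
    p = p0.astype(float).copy()
    for _ in range(int(round(T / dt))):
        p = p + dt * (Q @ p)              # eq. (2) in matrix form: dp/dt = Q p
        p = np.clip(p, 0.0, None)
        p /= p.sum()                      # guard against numerical drift
    return p
```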
2 Variational approximate inference
Let us assume that we have noisy observations $\mathbf{y}_l$, $l = 1, \ldots, N$ of the state of the system at a discrete number of time points; the noise model is specified by a likelihood function $p(\mathbf{y}_l\,|\,\mathbf{x}(t_l))$. We
can combine this likelihood with the prior process to obtain a posterior process. As the observations
happen at discrete time points, the posterior process is clearly still a Markov jump process. Given
the Markovian nature of the processes, one could hope to obtain the posterior rate functions $g(\mathbf{x}'|\mathbf{x})$
by a forward-backward procedure similar to the one used for Hidden Markov Models. While this
is possible in principle, the computations would require simultaneously solving a very large system
of coupled linear ODEs (the number of equations is of order $S^D$, $S$ being the number of states
accessible to the system), which is not feasible even in simple systems.
In the following, we will use the variational mean field (MF) approach to approximate the posterior
process by a factorizing process, minimising the Kullback-Leibler (KL) divergence between processes. The inference process is then reduced to the solution of $D$ one-dimensional Master and
backward equations of size S. This is still nontrivial because the KL divergence requires the joint
probabilities of variables x(t) at infinitely many different times t, i.e. probabilities over entire paths
of a process rather than the simpler marginals pt (x). We will circumvent this problem by working
with time discretised trajectories and then passing on to the continuum time limit. We denote such
a trajectory as $\mathbf{x}_{0:K} = (\mathbf{x}(t_0), \ldots, \mathbf{x}(t_0 + K\Delta t))$ where $\Delta t$ is a small time interval and $K$ is very
large. Hence, we write the joint posterior probability as
$$p_{\text{post}}(\mathbf{x}_{0:K}) = \frac{1}{Z}\, p_{\text{prior}}(\mathbf{x}_{0:K}) \prod_{l=1}^{N} p(\mathbf{y}_l\,|\,\mathbf{x}(t_l)) \qquad \text{with} \quad p_{\text{prior}}(\mathbf{x}_{0:K}) = p(\mathbf{x}_0) \prod_{k=0}^{K-1} p(\mathbf{x}_{k+1}|\mathbf{x}_k)$$
with $Z = p(\mathbf{y}_1, \ldots, \mathbf{y}_N)$. Note that $\mathbf{x}(t_l) \in \mathbf{x}_{0:K}$. In the rest of this section, we will show how to
compute the posterior rates and marginals by minimising the KL divergence. We notice in passing
that a similar framework for continuous stochastic processes was proposed recently in [8].
2.1 KL divergence between MJPs
The KL divergence between two MJPs defined by their path probabilities $p(\mathbf{x}_{0:K})$ and $q(\mathbf{x}_{0:K})$ is
$$\mathrm{KL}[q, p] = \sum_{\mathbf{x}_{0:K}} q(\mathbf{x}_{0:K}) \ln \frac{q(\mathbf{x}_{0:K})}{p(\mathbf{x}_{0:K})} = \sum_{k=0}^{K-1} \sum_{\mathbf{x}_k} q(\mathbf{x}_k) \sum_{\mathbf{x}_{k+1}} q(\mathbf{x}_{k+1}|\mathbf{x}_k) \ln \frac{q(\mathbf{x}_{k+1}|\mathbf{x}_k)}{p(\mathbf{x}_{k+1}|\mathbf{x}_k)} + K_0$$
where $K_0 = \sum_{\mathbf{x}_0} q(\mathbf{x}_0) \log \frac{q(\mathbf{x}_0)}{p(\mathbf{x}_0)}$ will be set to zero in the following. We can now use equation (1) for the conditional probabilities; letting $\Delta t \to 0$ and simultaneously $K \to \infty$ so that $K\Delta t \to T$, we obtain
$$\mathrm{KL}[q, p] = \int_0^T dt \sum_{\mathbf{x}} q_t(\mathbf{x}) \sum_{\mathbf{x}': \mathbf{x}' \ne \mathbf{x}} \left[ g(\mathbf{x}'|\mathbf{x}) \ln \frac{g(\mathbf{x}'|\mathbf{x})}{f(\mathbf{x}'|\mathbf{x})} + f(\mathbf{x}'|\mathbf{x}) - g(\mathbf{x}'|\mathbf{x}) \right] \qquad (3)$$
where $f(\mathbf{x}'|\mathbf{x})$ and $g(\mathbf{x}'|\mathbf{x})$ are the rates of the $p$ and $q$ process respectively. Notice that we have swapped from the path probabilities framework to an expression that depends solely on the process rates and marginals.

2.2 MF approximation to posterior MJPs
We will now consider the case where $p$ is a posterior MJP and $q$ is an approximating process. The prior process will be denoted as $p_{\text{prior}}$ and its rates will be denoted by $f$. The KL divergence then is
$$\mathrm{KL}(q, p_{\text{post}}) = \ln Z + \mathrm{KL}(q, p_{\text{prior}}) - \sum_{l=1}^{N} E_q[\ln p(\mathbf{y}_l\,|\,\mathbf{x}(t_l))].$$
To obtain a tractable inference problem, we will assume that, in the approximating process q, the
joint path probability for all the species factorises into the product of path probabilities for individual species. This gives the following equations for the species probabilities and transition rates
$$q_t(\mathbf{x}) = \prod_{i=1}^{D} q_{it}(x_i), \qquad g_t(\mathbf{x}'|\mathbf{x}) = \sum_{i=1}^{D} \prod_{j \ne i} \delta_{x'_j, x_j}\, g_{it}(x'_i|x_i). \qquad (4)$$
Notice that we have emphasised that the process rates for the approximating process may depend
explicitly on time, even if the process rates of the original process do not. Exploiting these assumptions, we obtain that the KL divergence between the approximating process and the posterior process
is given by
$$\mathrm{KL}[q, p_{\text{post}}] = \ln Z - \sum_{l=1}^{N} E_q[\ln p(\mathbf{y}_l\,|\,\mathbf{x}(t_l))] + \int_0^T dt \sum_i \sum_x q_{it}(x) \sum_{x': x' \ne x} \left\{ g_{it}(x'|x) \ln \frac{g_{it}(x'|x)}{\tilde{f}_i(x'|x)} + \bar{f}_i(x'|x) - g_{it}(x'|x) \right\} \qquad (5)$$
where we have defined
$$\tilde{f}_i(x'|x) = \exp\!\left( E_{\mathbf{x} \setminus i}\!\left[ \ln f_i(x'|\mathbf{x} : x'_j = x_j,\ \forall j \ne i) \right] \right), \qquad \bar{f}_i(x'|x) = E_{\mathbf{x} \setminus i}\!\left[ f_i(x'|\mathbf{x} : x'_j = x_j,\ \forall j \ne i) \right] \qquad (6)$$
and $E_{\mathbf{x} \setminus i}[\cdot]$ denotes an expectation over all components of $\mathbf{x}$ except $x_i$ (using the measure $q$). In
order to find the MF approximation to the posterior process we must optimise the KL divergence (5)
with respect to the marginals $q_{it}(x)$ and the rates $g_{it}(x'|x)$. These, however, are not independent
but fulfill the Master equation (2).
We will take care of this constraint by using a Lagrange multiplier function $\lambda_i(x, t)$ and compute
the stationary values of the Lagrangian
$$\mathcal{L} = \mathrm{KL}(q, p_{\text{post}}) - \sum_i \int_0^T dt \sum_x \lambda_i(x, t) \left[ \partial_t q_{it}(x) - \sum_{x' \ne x} \left\{ g_{it}(x|x')\, q_{it}(x') - g_{it}(x'|x)\, q_{it}(x) \right\} \right]. \qquad (7)$$
We can now compute functional derivatives of (7) to obtain
$$\frac{\partial \mathcal{L}}{\partial q_{it}(x)} = \sum_{x' \ne x} \left[ g_{it}(x'|x) \ln \frac{g_{it}(x'|x)}{\tilde{f}_i(x'|x)} - g_{it}(x'|x) + \bar{f}_i(x'|x) \right] + \partial_t \lambda_i(x, t) + \sum_{x'} g_{it}(x'|x) \left\{ \lambda_i(x', t) - \lambda_i(x, t) \right\} - \sum_l \ln p(\mathbf{y}_l\,|\,\mathbf{x}(t))\, \delta(t - t_l) = 0 \qquad (8)$$
$$\frac{\partial \mathcal{L}}{\partial g_{it}(x'|x)} = q_{it}(x) \left( \ln \frac{g_{it}(x'|x)}{\tilde{f}_i(x'|x)} + \lambda_i(x', t) - \lambda_i(x, t) \right) = 0 \qquad (9)$$
Defining $r_i(x, t) = e^{-\lambda_i(x, t)}$ and inserting (9) into (8), we arrive at the linear differential equation
$$\frac{d r_i(x, t)}{dt} = \sum_{x' \ne x} \left[ \bar{f}_i(x'|x)\, r_i(x, t) - \tilde{f}_i(x'|x)\, r_i(x', t) \right] \qquad (10)$$
valid for all times outside of the observations. To include the observations, we assume for simplicity that the noise model factorises across the species, so that $p(\mathbf{y}_l\,|\,\mathbf{x}(t_l)) = \prod_i p_i(y_{il}\,|\,x_i(t_l))$ $\forall l$. Then equation (8) yields
$$\lim_{t \to t_l^-} r_i(x, t) = p_i(y_{il}\,|\,x) \lim_{t \to t_l^+} r_i(x, t).$$
We can then optimise the Lagrangian (7) using an iterative strategy. Starting with an initial guess for $q_t(\mathbf{x})$ and selecting a species $i$, we can compute $\tilde{f}_i(x'|x)$ and $\bar{f}_i(x'|x)$. Using these, we can solve equation (10) backwards starting from the condition $r_i(x, T) = 1$ $\forall x$ (i.e., the constraint becomes void at the end of the time under consideration). This allows us to update our estimate of the rates $g_{it}(x'|x)$ using equation (9), which can then be used to solve the master equation (2) and update our guess of $q_{it}(x)$. This procedure can be followed sequentially for all the species; as each step leads to a decrease in the value of the Lagrangian, this guarantees that the algorithm will converge to a (local) minimum.
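The following sketch shows the backward-forward structure of one such sweep for a single species on a truncated state space, assuming the averaged rates (6) have already been computed on a time grid; the discretisation details and names are ours, not the paper's:

```python
import numpy as np

def mf_sweep_single_species(fbar, ftil, q0, obs, dt):
    """One mean-field sweep for a single species on S truncated states.

    fbar, ftil : (K, S, S) time-discretized averaged rates of eq. (6),
                 indexed [k, x_to, x_from], with zero diagonals.
    q0         : (S,) initial marginal.
    obs        : dict mapping a time index k to a likelihood vector p_i(y | x).
    Returns posterior marginals q of shape (K+1, S) and rates g of shape (K, S, S).
    """
    K, S, _ = fbar.shape
    r = np.ones((K + 1, S))
    for k in range(K - 1, -1, -1):                 # backward sweep, eq. (10)
        drdt = r[k + 1] * fbar[k].sum(axis=0) - ftil[k].T @ r[k + 1]
        r[k] = r[k + 1] - dt * drdt
        if k in obs:                               # jump condition at observations
            r[k] = obs[k] * r[k]
    q = np.zeros((K + 1, S))
    q[0] = q0
    g = np.zeros((K, S, S))
    for k in range(K):                             # forward sweep: eq. (2) with g
        ratio = r[k][:, None] / np.maximum(r[k][None, :], 1e-300)
        g[k] = ftil[k] * ratio                     # eq. (9): g(x'|x) = ftil r(x')/r(x)
        flow = g[k] @ q[k] - g[k].sum(axis=0) * q[k]
        q[k + 1] = np.clip(q[k] + dt * flow, 0.0, None)
        q[k + 1] /= q[k + 1].sum()
    return q, g
```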
2.3 Parameter estimation

Since $\mathrm{KL}[q, p_{\text{post}}] \ge 0$, we obtain as a useful by-product of the MF approximation a tractable variational lower bound on the log-likelihood of the data $\log Z = \log p(\mathbf{y}_1, \ldots, \mathbf{y}_N)$ from (5). As usual [e.g. 7], such a bound can be used in order to optimise model parameters using a variational E-M algorithm.
3 Example: the Lotka-Volterra process
The Lotka-Volterra (LV) process is often used as perhaps the simplest non-trivial MJP [6, 4]. Introduced independently by Alfred J. Lotka in 1925 and Vito Volterra in 1926, it describes the dynamics
of a population composed of two interacting species, traditionally referred to as predator and prey.
The process rates for the LV system are given by
$$\begin{aligned} f_{\text{prey}}(x+1\,|\,x, y) &= \alpha x, & f_{\text{prey}}(x-1\,|\,x, y) &= \beta x y, \\ f_{\text{predator}}(y+1\,|\,x, y) &= \gamma x y, & f_{\text{predator}}(y-1\,|\,x, y) &= \delta y \end{aligned} \qquad (11)$$
where $x$ is the number of prey and $y$ is the number of predators. All other rates are zero: individuals can only be created or destroyed one at a time. Rate sparsity is a characteristic of very many processes, including all chemical kinetic processes (indeed, the LV model can be interpreted as a chemical kinetic model). An immediate difficulty in implementing our strategy is that some of the process rates are identically zero when one of the species is extinct (i.e. its numbers have reached zero); this will lead to infinities when computing the expectation of the logarithm of the rates in equation (6). To avoid this, we will 'regularise' the process by adding a small constant to $f(1|0)$; it can be proved that on average over the data generating process the variational approximation to the regularised process still optimises a bound analogous to (3) on the original process [9].
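For reference, the kind of synthetic data used below can be generated by a standard Gillespie simulation of the rates (11); a minimal sketch (names ours):

```python
import numpy as np

def gillespie_lv(x0, y0, alpha, beta, gamma, delta, T, rng=None):
    """Exact stochastic simulation of the Lotka-Volterra MJP with rates (11)."""
    if rng is None:
        rng = np.random.default_rng()
    t, x, y = 0.0, x0, y0
    path = [(t, x, y)]
    while t < T:
        rates = np.array([alpha * x,       # prey birth:      x -> x + 1
                          beta * x * y,    # prey death:      x -> x - 1
                          gamma * x * y,   # predator birth:  y -> y + 1
                          delta * y])      # predator death:  y -> y - 1
        total = rates.sum()
        if total <= 0.0:                   # both species extinct: nothing can fire
            break
        t += rng.exponential(1.0 / total)  # waiting time to the next event
        k = rng.choice(4, p=rates / total) # which event fires
        x += (k == 0) - (k == 1)
        y += (k == 2) - (k == 3)
        path.append((t, x, y))
    return path
```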
The variational estimates for the parameters of the LV process are obtained by inserting the process
rates (11) into the MF bound and taking derivatives w.r.t. the parameters. Setting them to zero, we
obtain a set of fixed point equations
$$\alpha = \frac{\int_0^T \langle g_{\text{prey},t}(x+1|x)\rangle_{\text{prey},t}\,dt}{\int_0^T \langle x\rangle_{\text{prey},t}\,dt}, \qquad \beta = \frac{\int_0^T \langle g_{\text{prey},t}(x-1|x)\rangle_{\text{prey},t}\,dt}{\int_0^T \langle x\rangle_{\text{prey},t}\langle y\rangle_{\text{predator},t}\,dt},$$
$$\gamma = \frac{\int_0^T \langle g_{\text{predator},t}(y+1|y)\rangle_{\text{predator},t}\,dt}{\int_0^T \langle x\rangle_{\text{prey},t}\langle y\rangle_{\text{predator},t}\,dt}, \qquad \delta = \frac{\int_0^T \langle g_{\text{predator},t}(y-1|y)\rangle_{\text{predator},t}\,dt}{\int_0^T \langle y\rangle_{\text{predator},t}\,dt}. \qquad (12)$$
Equations (12) have an appealing intuitive meaning in terms of the physics of the process: for example, $\alpha$ is given by the average total increase rate of the approximating process divided by the average total number of prey.
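Discretising the integrals in (12) on the time grid of the mean-field sweep gives simple ratio updates; a sketch (array shapes are our own conventions, and the time step cancels in every ratio):

```python
import numpy as np

def lv_parameter_updates(q_prey, q_pred, g_prey_up, g_prey_dn, g_pred_up, g_pred_dn):
    """Variational M-step for the LV parameters: a discretization of eq. (12).

    q_prey, q_pred : (K, S) posterior marginals over counts 0..S-1 on a time grid.
    g_*_up, g_*_dn : (K,) expected birth/death rates <g_t(. + 1 | .)>, <g_t(. - 1 | .)>.
    """
    counts = np.arange(q_prey.shape[1])
    mean_x = q_prey @ counts                      # <x>_t under q_prey
    mean_y = q_pred @ counts                      # <y>_t under q_pred
    alpha = g_prey_up.sum() / mean_x.sum()
    beta = g_prey_dn.sum() / (mean_x * mean_y).sum()
    gamma = g_pred_up.sum() / (mean_x * mean_y).sum()
    delta = g_pred_dn.sum() / mean_y.sum()
    return alpha, beta, gamma, delta
```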
We generated 15 counts of predator and prey numbers at regular intervals from a LV process with parameters $\alpha = 5 \times 10^{-4}$, $\beta = 1 \times 10^{-4}$, $\gamma = 5 \times 10^{-4}$ and $\delta = 1 \times 10^{-4}$, starting from initial population levels of seven predators and nineteen prey. These counts were then corrupted according to the following noise model
$$p_i(y_{il}\,|\,x_i(t_l)) \propto \left(\frac{1}{2}\right)^{|y_{il} - x_i(t_l)|} + 10^{-6}, \qquad (13)$$
where $x_i(t_l)$ is the (discrete) count for species $i$ at time $t_l$ before the addition of noise. Notice that, since population numbers are constrained to be positive, the noise model is not symmetric. The original count is placed at the mode, rather than the mean, of the noise model. This asymmetry is unavoidable when dealing with quantities that are constrained positive.
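In code, (13) supplies the likelihood vectors consumed by the jump conditions of the backward sweep sketched above; a two-line sketch:

```python
import numpy as np

def noise_factor(y_obs, S, floor=1e-6):
    """Likelihood vector p_i(y_obs | x) over states x = 0..S-1, from eq. (13)."""
    x = np.arange(S)
    return 0.5 ** np.abs(y_obs - x) + floor
```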
While in theory each species can have an arbitrarily large number of individuals, in order to solve the
differential equations (2) and (10) we have to truncate the process. While the truncation threshold
could be viewed as another parameter and optimised variationally, in these experiments we took a
more heuristic approach and limited the maximum number of individuals of each species to 200.
This was justified by considering that an exponential growth pattern fitted to the available data led to
an estimate of approximately 90 individuals in the most abundant species, well below the truncation
threshold.
The results of the inference are shown in Figure 1. The solid line is the mean of the approximating
distribution, the dashed lines are the 90% confidence intervals, the dotted line is the true path from
which the data was obtained. The diamonds are the noisy observations. The parameter values
inferred are reasonably close to the real parameter values: $\alpha = 1.35 \times 10^{-3}$, $\beta = 2.32 \times 10^{-4}$, $\gamma = 1.57 \times 10^{-3}$ and $\delta = 1.78 \times 10^{-4}$.
[Figure: two panels, (a) and (b), plotting population counts against time t from 0 to 3000.]
Figure 1: MF approximation to posterior LV process: (a) predator population and (b) prey population. Diamonds are the (noisy) observed data points, solid line the mean, dashed lines 90%
confidence intervals, dotted lines the true path from which the data was sampled.
While the process is well approximated in the area where
data is present, the free-form prediction is less good, especially for the predator population. This
might be due to the inaccuracies in the estimates of the parameters. The approximate posterior
displays nontrivial emerging properties: for example, we predict that there is a 10% chance that the
prey population will become extinct at the end of the period of interest. These results were obtained
in approximately fifteen minutes on an Intel Pentium M 1.7GHz laptop computer.
To check the reliability of our inference results and the rate with which the estimated parameter
values converge to the true values, we repeated our experiments for 5, 10, 15 and 20 available data
points. For each sample size, we drew five independent samples from the same LV process. Figure
2(a) shows the average and standard deviation of the mean squared error (MSE) in the estimate of
the parameters as a function of the number of observations N ; as expected, this decreases uniformly
with the sample size.
4 Example: gene autoregulatory network
As a second example we consider a gene autoregulatory network. This simple network motif is
one of the most important building blocks of the transcriptional regulatory networks found in cells
because of its ability to increase robustness in the face of fluctuation in external signals [10]. Because
of this, it is one of the best studied systems, both at the experimental and at the modelling level
[11, 3]. The system consists again of two species, mRNA and protein; the process rates are given by
$$\begin{aligned} f_{\text{RNA}}(x+1\,|\,x, y) &= \alpha\,(1 - 0.99\,\Theta(y - y_c)), & f_{\text{RNA}}(x-1\,|\,x, y) &= \beta x, \\ f_{\text{p}}(y+1\,|\,x, y) &= \gamma x, & f_{\text{p}}(y-1\,|\,x, y) &= \delta y \end{aligned} \qquad (14)$$
where $\Theta$ is the Heaviside step function, $y$ the protein number and $x$ the mRNA number. The intuitive meaning of these equations is simple: both protein and mRNA decay exponentially. Proteins are produced through translation of mRNA with a rate proportional to the mRNA abundance. On the other hand, mRNA production depends on protein concentration levels through a logical function: as soon as protein numbers increase beyond a certain critical parameter $y_c$, mRNA production drops dramatically by a factor 100.
The optimisation of the variational bound w.r.t. the parameters $\alpha$, $\beta$, $\gamma$ and $\delta$ is straightforward and yields fixed point equations similar to the ones for the LV process. The dependence of the MF bound on the critical parameter $y_c$ is less straightforward and is given by
$$\mathcal{L}_{y_c} = \text{const} + 2\left\{ \int_0^T dt\, \bar{g}\, h(y_c) + \log\!\left[ 1 - 0.99\,\frac{1}{T}\int_0^T h(y_c)\, dt \right] \int_0^T dt\, \bar{g} \right\} \qquad (15)$$
where $\bar{g} = \langle g_{\text{RNA}}(x+1|x) \rangle_{q_{\text{RNA}}}$ and $h(y_c) = \sum_{y \ge y_c} q_{\text{p}}(y)$. A plot of this function obtained
during the inference task below can be seen in Figure 2(b). We can determine the minimum of (15)
by searching over the possible (discrete) values of $y_c$.
[Figure: panel (a) plots the MSE against N; panel (b) plots the negative bound L against y_c.]
Figure 2: (a) Mean squared error (MSE) in the estimate of the parameters as a function of the
number of observations N for the LV process. (b) Negative variational likelihood bound for the
gene autoregulatory network as a function of the critical parameter $y_c$.
[Figure: two panels, (a) and (b), plotting molecule counts against time t from 0 to 2000.]
Figure 3: MF approximation to posterior autoregulatory network process: (a) protein population and
(b) mRNA population. Diamonds are the (noisy) observed data points, solid line the mean, dashed
lines 90% confidence intervals, dotted lines the true path from which the data was taken.
Again, we generated data by simulating the process with parameter values $y_c = 20$, $\alpha = 2 \times 10^{-3}$, $\beta = 6 \times 10^{-5}$, $\gamma = 5 \times 10^{-4}$ and $\delta = 7 \times 10^{-5}$. Fifteen counts were generated for both mRNA and proteins, with initial counts of 17 protein and 12 mRNA molecules. These were then corrupted with noise generated from the distribution shown in equation (13). The results of the approximate posterior inference are shown in Figure 3. The inferred parameter values are in good agreement with the true values: $y_c = 19$, $\alpha = 2.20 \times 10^{-3}$, $\beta = 1.84 \times 10^{-5}$, $\gamma = 4.01 \times 10^{-4}$ and $\delta = 1.54 \times 10^{-4}$. Interestingly, if the data is such that the protein count never exceeds the critical parameter $y_c$, this becomes unidentifiable (the likelihood bound is optimised by $y_c = \infty$ or $y_c = 0$), as may be expected. The likelihood bound loses its sharp optimum evident from Figure 2(b) (results not shown).
5 Discussion
In this contribution we have shown how a MF approximation can be used to perform posterior inference in MJPs from discretely observed noisy data. The MF approximation has been shown to
perform well and to retain much of the richness of these complex systems. The proposed approach
is conceptually very different from existing MCMC approaches [6]. While these focus on sampling
from the distribution of reactions happening in a small interval in time, we compute an approximation to the probability distribution over possible paths of the system. This allows us to easily
factorise across species; by contrast, sampling the number of reactions happening in a certain time
interval is difficult, and not amenable to simple techniques such as Gibbs sampling. While it is possible that future developments will lead to more efficient sampling strategies, our approach outstrips current MCMC based methods in terms of computational efficiency. A further strength of our
approach is the ease with which it can be scaled to more complex systems involving larger numbers
of species. The factorisation assumption implies that the computational complexity grows linearly
in the number of species D; it is unclear how MCMC would scale to larger systems.
An alternative suggestion, proposed in [11], was to seek a middle way between a MJP and a deterministic, ODE-based approach by approximating the MJP with a continuous stochastic process, i.e. by using a diffusion approximation. While these authors show that this approximation works reasonably well for inference purposes, it is worth pointing out that the population sizes in their experimental results were approximately one order of magnitude larger than in ours. It is arguable that a diffusion approximation might be suitable for population sizes as low as a few hundred, but it cannot be expected to be reasonable for population sizes of the order of 10.
The availability of a practical tool for statistical inference in MJPs opens a number of important
possible developments for modelling. It would be of interest, for example, to develop mixed models where one species with low counts interacts with another species with high counts that can be
modelled using a deterministic or diffusion approximation. This situation would be of particular importance for biological applications, where different proteins can have very different copy numbers
in a cell but still be equally important. Another interesting extension is the possibility of introducing
a spatial dimension which influences how likely interactions are. Such an extension would be very
important, for example, in an epidemiological study. All of these extensions rely centrally on the
possibility of estimating posterior probabilities, and we expect that the availability of a practical tool
for the inference task will be very useful to facilitate this.
References
[1] Harley H. McAdams and Adam Arkin. Stochastic mechanisms in gene expression. Proceedings of the National Academy of Sciences USA, 94:814–819, 1997.
[2] Long Cai, Nir Friedman, and X. Sunney Xie. Stochastic protein expression in individual cells at the single molecule level. Nature, 440:580–586, 2006.
[3] Yoshito Masamizu, Toshiyuki Ohtsuka, Yoshiki Takashima, Hiroki Nagahara, Yoshiko Takenaka, Kenichi Yoshikawa, Hitoshi Okamura, and Ryoichiro Kageyama. Real-time imaging of the somite segmentation clock: revelation of unstable oscillators in the individual presomitic mesoderm cells. Proceedings of the National Academy of Sciences USA, 103:1313–1318, 2006.
[4] Daniel T. Gillespie. Exact stochastic simulation of coupled chemical reactions. Journal of Physical Chemistry, 81(25):2340–2361, 1977.
[5] Eric Mjolsness and Guy Yosiphon. Stochastic process semantics for dynamical grammars. To appear in Annals of Mathematics and Artificial Intelligence, 2006.
[6] Richard J. Boys, Darren J. Wilkinson, and Thomas B. L. Kirkwood. Bayesian inference for a discretely observed stochastic kinetic model. Available from http://www.staff.ncl.ac.uk/d.j.wilkinson/pub.html, 2004.
[7] Manfred Opper and David Saad (editors). Advanced Mean Field Methods. MIT Press, Cambridge, MA, 2001.
[8] Cedric Archambeau, Dan Cornford, Manfred Opper, and John Shawe-Taylor. Gaussian process approximations of stochastic differential equations. Journal of Machine Learning Research Workshop and Conference Proceedings, 1(1):1–16, 2007.
[9] Manfred Opper and David Haussler. Bounds for predictive errors in the statistical mechanics of supervised learning. Physical Review Letters, 75:3772–3775, 1995.
[10] Uri Alon. An Introduction to Systems Biology. Chapman and Hall, London, 2006.
[11] Andrew Golightly and Darren J. Wilkinson. Bayesian inference for stochastic kinetic models using a diffusion approximation. Biometrics, 61(3):781–788, 2005.
2,532 | 3,297 | Receding Horizon
Differential Dynamic Programming
Yuval Tassa∗
Tom Erez & Bill Smart†
Abstract
The control of high-dimensional, continuous, non-linear dynamical systems is a
key problem in reinforcement learning and control. Local, trajectory-based methods, using techniques such as Differential Dynamic Programming (DDP), are not
directly subject to the curse of dimensionality, but generate only local controllers.
In this paper,we introduce Receding Horizon DDP (RH-DDP), an extension to the
classic DDP algorithm, which allows us to construct stable and robust controllers
based on a library of local-control trajectories. We demonstrate the effectiveness of our approach on a series of high-dimensional problems using a simulated
multi-link swimming robot. These experiments show that our approach effectively
circumvents dimensionality issues, and is capable of dealing with problems of (at
least) 24 state and 9 action dimensions.
1 Introduction
We are interested in learning controllers for high-dimensional, highly non-linear dynamical systems, continuous in state, action, and time. Local, trajectory-based methods, using techniques such as Differential Dynamic Programming (DDP), are an active field of research in the Reinforcement Learning and Control communities. Local methods do not model the value function or policy over the entire state space; instead they focus computational effort along likely trajectories. Featuring algorithmic complexity polynomial in the dimension, local methods are not directly affected by dimensionality issues in the way space-filling methods are.
In this paper, we introduce Receding Horizon DDP (RH-DDP), a set of modifications to the classic
DDP algorithm, which allows us to construct stable and robust controllers based on local-control
trajectories in highly non-linear, high-dimensional domains. Our new algorithm is reminiscent of
Model Predictive Control, and enables us to form a time-independent value function approximation
along a trajectory. We aggregate several such trajectories into a library of locally-optimal linear
controllers which we then select from, using a nearest-neighbor rule.
Although we present several algorithmic contributions, a main aspect of this paper is a conceptual
one. Unlike much of recent related work (below), we are not interested in learning to follow a
pre-supplied reference trajectory. We define a reward function which represents a global measure
of performance relative to a high level objective, such as swimming towards a target. Rather than
a reward based on distance from a given desired configuration, a notion which has its roots in the
control community's definition of the problem, this global reward dispenses with a 'path planning'
component and requires the controller to solve the entire problem.
We demonstrate the utility of our approach by learning controllers for a high-dimensional simulation
of a planar, multi-link swimming robot. The swimmer is a model of an actuated chain of links
in a viscous medium, with two location and velocity coordinate pairs, and an angle and angular
velocity for each link. The controller must determine the applied torque, one action dimension for
∗Y. Tassa is with the Hebrew University, Jerusalem, Israel.
†T. Erez and W.D. Smart are with the Washington University in St. Louis, MO, USA.
each articulated joint. We reward controllers that cause the swimmer to swim to a target, brake on
approach and come to a stop over it.
We synthesize controllers for several swimmers, with state dimensions ranging from 10 to 24 dimensions. The controllers are shown to exhibit complex locomotive behaviour in response to real-time
simulated interaction with a user-controlled target.
1.1 Related work
Optimal control of continuous non-linear dynamical systems is a central research goal of the RL
community. Even when important ingredients such as stochasticity and on-line learning are removed, the exponential dependence of computational complexity on the dimensionality of the domain remains a major computational obstacle. Methods designed to alleviate the curse of dimensionality include adaptive discretizations of the state space [1], and various domain-specific manipulations [2] which reduce the effective dimensionality.
Local trajectory-based methods such as DDP were introduced to the NIPS community in [3], where
a local-global hybrid method is employed. Although DDP is used there, it is considered an aid to the
global approximator, and the local controllers are constant rather than locally-linear. In this decade
DDP was reintroduced by several authors. In [4] the idea of using the second order local DDP
models to make locally-linear controllers is introduced. In [5] DDP was applied to the challenging
high-dimensional domain of autonomous helicopter control, using a reference trajectory. In [6]
a minimax variant of DDP is used to learn a controller for bipedal walking, again by designing
a reference trajectory and rewarding the walker for tracking it. In [7], trajectory-based methods
including DDP are examined as possible models for biological nervous systems. Local methods
have also been used for purely policy-based algorithms [8, 9, 10], without explicit representation of
the value function.
The best known work regarding the swimming domain is that by Ijspeert and colleagues (e.g. [11])
using Central Pattern Generators. While the inherently stable domain of swimming allows for such
open-loop control schemes, articulated complex behaviours such as turning and tracking necessitate
full feedback control which CPGs do not provide.
2 Methods
2.1 Definition of the problem
We consider the discrete-time dynamics $x^{k+1} = F(x^k, u^k)$ with states $x \in \mathbb{R}^n$ and actions $u \in \mathbb{R}^m$. In this context we assume $F(x^k, u^k) = x^k + \int_0^{\Delta t} f(x(t), u^k)\,dt$ for a continuous $f$ and a small $\Delta t$, approximating the continuous problem and identifying with it in the $\Delta t \to 0$ limit. Given some scalar reward function $r(x, u)$ and a fixed initial state $x^1$ (superscripts indicating the time index), we wish to find the policy which maximizes the total reward¹ acquired over a finite temporal horizon:
$$
\pi^*(x^k, k) = \operatorname*{argmax}_{\pi(\cdot,\cdot)} \Big[ \sum_{i=k}^{N} r\big(x^i, \pi(x^i, i)\big) \Big].
$$
The quantity maximized on the RHS is the value function, which solves Bellman's equation:
$$
V(x, k) = \max_u \big[ r(x, u) + V(F(x, u), k+1) \big]. \qquad (1)
$$
Each of the functions in the sequence $\{V(x, k)\}_{k=1}^{N}$ describes the optimal reward-to-go of the optimization subproblem from $k$ to $N$. This is a manifestation of the dynamic programming principle. If $N = \infty$, essentially eliminating the distinction between different time-steps, the sequence collapses to a global, time-independent value function $V(x)$.
2.2 DDP
Differential Dynamic Programming [12, 13] is an iterative improvement scheme which finds a
locally-optimal trajectory emanating from a fixed starting point $x^1$. At every iteration, an approximation to the time-dependent value function is constructed along the current trajectory $\{x^k\}_{k=1}^{N}$, which is formed by iterative application of $F$ using the current control sequence $\{u^k\}_{k=1}^{N}$. Every iteration is comprised of two sweeps of the trajectory: a backward and a forward sweep.
¹We (arbitrarily) choose to use phrasing in terms of reward-maximization, rather than cost-minimization.
In the backward sweep, we proceed backwards in time to generate local models of V in the following
manner. Given quadratic models of $V(x^{k+1}, k+1)$, $F(x^k, u^k)$ and $r(x^k, u^k)$, we can approximate the unmaximised value function, or $Q$-function,
$$
Q(x^k, u^k) = r(x^k, u^k) + V^{k+1}\big(F(x^k, u^k)\big) \qquad (2)
$$
as a quadratic model around the present state-action pair $(x^k, u^k)$:
$$
Q(x + \delta x, u + \delta u) \approx Q_0 + Q_x \delta x + Q_u \delta u + \tfrac{1}{2} \begin{bmatrix} \delta x^T & \delta u^T \end{bmatrix} \begin{bmatrix} Q_{xx} & Q_{xu} \\ Q_{ux} & Q_{uu} \end{bmatrix} \begin{bmatrix} \delta x \\ \delta u \end{bmatrix} \qquad (3)
$$
Where the coefficients $Q_{\bullet\bullet}$ are computed by equating coefficients of similar powers in the second-order expansion of (2):
$$
\begin{aligned}
Q_x &= r_x + V_x^{k+1} F_x^k \\
Q_u &= r_u + V_x^{k+1} F_u^k \\
Q_{xx} &= r_{xx} + F_x^k V_{xx}^{k+1} F_x^k + V_x^{k+1} F_{xx}^k \\
Q_{uu} &= r_{uu} + F_u^k V_{xx}^{k+1} F_u^k + V_x^{k+1} F_{uu}^k \\
Q_{xu} &= r_{xu} + F_x^k V_{xx}^{k+1} F_u^k + V_x^{k+1} F_{xu}^k .
\end{aligned} \qquad (4)
$$
Once the local model of $Q$ is obtained, the maximizing $\delta u$ is solved for
$$
\delta u^* = \operatorname*{argmax}_{\delta u}\big[Q(x^k + \delta x, u^k + \delta u)\big] = -Q_{uu}^{-1}(Q_u + Q_{ux}\,\delta x) \qquad (5)
$$
and plugged back into (3) to obtain a quadratic approximation of $V^k$:
$$
\begin{aligned}
V_0^k &= V_0^{k+1} - Q_u (Q_{uu})^{-1} Q_u &\qquad (6a)\\
V_x^k &= Q_x^{k+1} - Q_u (Q_{uu})^{-1} Q_{ux} &\qquad (6b)\\
V_{xx}^k &= Q_{xx}^{k+1} - Q_{xu} (Q_{uu})^{-1} Q_{ux}. &\qquad (6c)
\end{aligned}
$$
This quadratic model can now serve to propagate the approximation to $V^{k-1}$. Thus, equations (4), (5) and (6) iterate in the backward sweep, computing a local model of the value function along with a modification to the policy in the form of an open-loop term $-Q_{uu}^{-1} Q_u$ and a feedback term $-Q_{uu}^{-1} Q_{ux}\,\delta x$, essentially solving a local linear-quadratic problem in each step. In some senses, DDP can be viewed as dual to the Extended Kalman Filter (though employing a higher order expansion of $F$).
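For concreteness, here is a minimal sketch of one backward-sweep step in the spirit of (4)-(6); the interface and names are ours, the second-order dynamics terms $F_{xx}, F_{uu}, F_{xu}$ are dropped (a common simplification), and the conditioning of the inversion mentioned later in the text is reduced to a crude diagonal shift:

```python
import numpy as np

def backward_step(rx, ru, rxx, ruu, rxu, Fx, Fu, Vx1, Vxx1, reg=1e-6):
    """One DDP backward-sweep step (maximization convention).
    rx..rxu: local reward derivatives; Fx, Fu: dynamics Jacobians;
    Vx1, Vxx1: quadratic value model at step k+1."""
    Qx = rx + Fx.T @ Vx1                       # coefficients of (4)
    Qu = ru + Fu.T @ Vx1
    Qxx = rxx + Fx.T @ Vxx1 @ Fx
    Quu = ruu + Fu.T @ Vxx1 @ Fu
    Qux = rxu.T + Fu.T @ Vxx1 @ Fx
    Quu_c = Quu - reg * np.eye(Quu.shape[0])   # crude conditioning of (5)
    k_open = -np.linalg.solve(Quu_c, Qu)       # open-loop term  -Quu^-1 Qu
    K_fb = -np.linalg.solve(Quu_c, Qux)        # feedback gain   -Quu^-1 Qux
    Vx = Qx + Qux.T @ k_open                   # value gradient, cf. (6b)
    Vxx = Qxx + Qux.T @ K_fb                   # value Hessian, cf. (6c)
    return k_open, K_fb, Vx, Vxx
```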
In the forward sweep of the DDP iteration, both the open-loop and feedback terms are combined to create a new control sequence $(\hat u^k)_{k=1}^{N}$ which results in a new nominal trajectory $(\hat x^k)_{k=1}^{N}$:
$$
\begin{aligned}
\hat x^1 &= x^1 &\qquad (7a)\\
\hat u^k &= u^k - Q_{uu}^{-1} Q_u - Q_{uu}^{-1} Q_{ux}\,(\hat x^k - x^k) &\qquad (7b)\\
\hat x^{k+1} &= F(\hat x^k, \hat u^k) &\qquad (7c)
\end{aligned}
$$
We note that in practice the inversion in (5) must be conditioned. We use a Levenberg-Marquardt-like scheme similar to the ones proposed in [14]. Similarly, the $u$-update in (7b) is performed with an adaptive line search scheme similar to the ones described in [15].
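A matching sketch of the forward sweep (7); the scalar `alpha` is a simplified stand-in for the adaptive line search of [15]:

```python
def forward_pass(x1, x_nom, u_nom, k_open, K_fb, F, alpha=1.0):
    """Roll out (7a)-(7c): combine open-loop and feedback terms into a
    new control sequence and the resulting nominal trajectory."""
    x_new, u_new = [x1], []
    for k in range(len(u_nom)):
        du = alpha * k_open[k] + K_fb[k] @ (x_new[k] - x_nom[k])
        u_new.append(u_nom[k] + du)
        x_new.append(F(x_new[k], u_new[k]))
    return x_new, u_new
```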
2.2.1 Complexity and convergence
The leading complexity term of one iteration of DDP itself, assuming the model of $F$ as required for (4) is given, is $O(N m^{\gamma_1})$ for computing (6) $N$ times, with $2 < \gamma_1 < 3$ the complexity-exponent of inverting $Q_{uu}$. In practice, the greater part of the computational effort is devoted to the measurement
of the dynamical quantities in (4) or in the propagation of collocation vectors as described below.
DDP is a second order algorithm with convergence properties similar to, or better than, Newton's method performed on the full vectorial $u^k$ with an exact $Nm \times Nm$ Hessian [16]. In practice,
convergence can be expected after 10-100 iterations, with the stopping criterion easily determined
as the size of the policy update plummets near the minimum.
2.2.2 Collocation Vectors
We use a new method of obtaining the quadratic model of $Q$ (Eq. (2)), inspired by [17].² Instead of using (4), we fit this quadratic model to samples of the value function at a cloud of collocation vectors $\{x_i^k, u_i^k\}_{i=1..p}$, spanning the neighborhood of every state-action pair along the trajectory. We can directly measure $r(x_i^k, u_i^k)$ and $F(x_i^k, u_i^k)$ for each point in the cloud, and by using the approximated value function at the next time step, we can estimate the value of (2) at every point:
$$
q(x_i^k, u_i^k) = r(x_i^k, u_i^k) + V^{k+1}\big(F(x_i^k, u_i^k)\big)
$$
Then, we can insert the values of $q(x_i^k, u_i^k)$ and $(x_i^k, u_i^k)$ on the LHS and RHS of (3) respectively, and solve this set of $p$ linear equations for the $Q_{\bullet\bullet}$ terms. If $p > (3(n + m) + (m + n)^2)/2$ and the cloud is in general configuration, the equations are non-singular and can be easily solved by a generic linear algebra package.
There are several advantages to using such a scheme. The full nonlinear model of $F$ is used to construct $Q$, rather than only a second-order approximation. $F_{xx}$, which is an $n \times n \times n$ tensor, need not be stored. The addition of more vectors can allow the modeling of noise, as suggested in [17]. In addition, this method allows us to more easily apply general coordinate transformations in order to represent $V$ in some internal space, perhaps of lower dimension.
The main drawback of this scheme is the additional complexity of an $O(N p^{\gamma_2})$ term for solving the $p$-equation linear system. Because we can choose $\{x_i^k, u_i^k\}$ in a way which makes the linear system sparse, we can enjoy the $\gamma_2 < \gamma_1$ of sparse methods and, at least for the experiments performed here, increase the running time only by a small factor.
In the same manner that DDP is dually reminiscent of the Extended Kalman Filter, this method bears a resemblance to the test vectors propagated in the Unscented Kalman Filter [18], although we use a quadratic, rather than linear, number of collocation vectors.
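The collocation fit itself is ordinary linear regression onto the monomials of (3). The paper solves a square (and possibly sparse) linear system; the sketch below uses dense least squares for brevity, and the names are ours:

```python
import numpy as np

def fit_quadratic_q(dx, du, q):
    """Fit the quadratic model (3) to p collocation samples.
    dx: (p, n) state offsets, du: (p, m) control offsets,
    q: (p,) sampled values of the Q-function (2)."""
    p, n = dx.shape
    m = du.shape[1]
    z = np.hstack([dx, du])                 # (p, n + m)
    cols = [np.ones((p, 1)), z]             # constant and linear terms
    for i in range(n + m):                  # quadratic monomials, incl. cross terms
        for j in range(i, n + m):
            cols.append((z[:, i] * z[:, j])[:, None])
    A = np.hstack(cols)
    coeffs, *_ = np.linalg.lstsq(A, q, rcond=None)
    return coeffs                           # Q0, then Qx, Qu, then Hessian entries
```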
2.3 Receding Horizon DDP
When seeking to synthesize a global controller from many local controllers, it is essential that the
different local components operate synergistically. In our context this means that local models of
the value function must all model the same function, which is not the case for the standard DDP
solution. The local quadratic models which DDP computes around the trajectory are approximations
to V (x, k), the time-dependent value function. The standard method in RL for creating a global
value function is to use an exponentially discounted horizon. Here we propose a fixed-length, non-discounted Receding Horizon scheme in the spirit of Model Predictive Control [19].
Having computed a DDP solution to some problem starting from many different starting points
$x^1$, we can discard all the models computed for points $x^{k>1}$ and save only the ones around the $x^1$'s. Although in this way we could accumulate a time-independent approximation to $V(x, N)$ only, starting each run of $N$-step DDP from scratch would be prohibitively expensive. We therefore propose the following: after obtaining the solution starting from $x^1$, we save the local model at $k = 1$ and proceed to solve a new $N$-step problem starting at $x^2$, this time initialized with the policy obtained on the previous run, shifted by one time-step, and appended with the last control: $u_{\text{new}} = [u^2, u^3, \ldots, u^N, u^N]$. Because this control sequence is very close to the optimal solution, the second-order convergence of DDP is in full effect and the algorithm converges in 1 or 2 sweeps. Again saving the model at the first time step, we iterate. We stress that without the fast and exact
convergence properties of DDP near the maximum, this algorithm would be far less effective.
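In outline, the receding-horizon procedure is the following loop (the `ddp_solve` routine, which warm-starts from the shifted policy and returns the trajectory together with the local model at its first step, is a hypothetical interface, not the authors' code):

```python
def receding_horizon_ddp(x1, u_init, ddp_solve, steps):
    """RH-DDP sketch: solve, save the k = 1 model, shift the policy by
    one time-step (repeating the last control), restart from x2."""
    library, x, u_warm = [], x1, list(u_init)
    for _ in range(steps):
        x_traj, u_traj, model_k1 = ddp_solve(x, u_warm)  # 1-2 sweeps near optimum
        library.append((x_traj[0], model_k1))            # keep only the k = 1 model
        u_warm = u_traj[1:] + [u_traj[-1]]               # u_new = [u2, ..., uN, uN]
        x = x_traj[1]                                    # next problem starts at x2
    return library
```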
2.4 Nearest Neighbor control with Trajectory Library
A run of DDP computes a locally quadratic model of $V$ and a locally linear model of $u$, expressed by the gain term $-Q_{uu}^{-1} Q_{ux}$. This term generalizes the open-loop policy to a tube around the trajectory, inside of which a basin-of-attraction is formed. Having lost the dependency on the time $k$ with the receding-horizon scheme, we need some space-based method of determining which local gain model we select at a given state. The simplest choice, which we use here, is to select the nearest Euclidean neighbor.
²Our method is a specific instantiation of a more general algorithm described therein.
Outside of the basin-of-attraction of a single trajectory, we can expect the policy to perform very
poorly and lead to numerical divergence if no constraint on the size of u is enforced. A possible
solution to this problem is to fill some volume of the state space with a library of local-control
trajectories [20], and consider all of them when selecting the nearest linear gain model.
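Selecting and applying a controller from such a library then reduces to a nearest-neighbor lookup. A sketch, assuming each library entry stores the anchor state, its control, and the feedback gain:

```python
import numpy as np

def nearest_neighbor_control(x, library):
    """Pick the locally-linear controller whose anchor state is the
    nearest Euclidean neighbor of x and evaluate its policy at x."""
    anchors = np.array([x_a for x_a, _, _ in library])
    i = int(np.argmin(np.linalg.norm(anchors - x, axis=1)))
    x_a, u_a, K = library[i]
    return u_a + K @ (x - x_a)
```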
3 Experiments
3.1 The swimmer dynamical system
We describe a variation of the $d$-link swimmer dynamical system [21]. A stick or link of length $l$, lying in a plane at an angle $\theta$ to some direction, parallel to $\hat t = \begin{pmatrix}\cos\theta \\ \sin\theta\end{pmatrix}$ and perpendicular to $\hat n = \begin{pmatrix}-\sin\theta \\ \cos\theta\end{pmatrix}$, moving with velocity $\dot x$ in a viscous fluid, is postulated to admit a normal frictional force $-k_n l\, \hat n (\dot x \cdot \hat n)$ and a tangential frictional force $-k_t l\, \hat t (\dot x \cdot \hat t)$, with $k_n > k_t > 0$. The swimmer is modeled as a chain of $d$ such links of lengths $l_i$ and masses $m_i$, its configuration described by the generalized coordinates $q = \begin{pmatrix} x_{cm} \\ \theta \end{pmatrix}$, of two center-of-mass coordinates and $d$ angles. Letting $\tilde x_i = x_i - x_{cm}$ be the positions of the link centers WRT the center of mass, the Lagrangian is
$$
\mathcal{L} = \tfrac{1}{2}\,\dot x_{cm}^2 \sum_i m_i + \tfrac{1}{2} \sum_i m_i \dot{\tilde x}_i^2 + \tfrac{1}{2} \sum_i I_i \dot\theta_i^2
$$
with $I_i = \tfrac{1}{12} m_i l_i^2$ the moments-of-inertia. The relationship between the relative position vectors and angles of the links is given by the $d-1$ equations $\tilde x_{i+1} - \tilde x_i = \tfrac{1}{2} l_{i+1} \hat t_{i+1} + \tfrac{1}{2} l_i \hat t_i$, which express the joining of successive links, and by the equation $\sum_i m_i \tilde x_i = 0$ which comes from the
(a) Time course of two angular velocities.
(b) State projection.
Figure 1: RH-DDP trajectories. (a) three snapshots of the receding horizon trajectory (dotted)
with the current finite-horizon optimal trajectory (solid) appended, for two state dimensions. (b)
Projections of the same receding-horizon trajectories onto the largest three eigenvectors of the full
state covariance matrix. As described in Section 3.3, the linear regime of the reward, here applied
to a 3-swimmer, compels the RH trajectories to a steady swimming gait, a limit cycle.
definition of the $\tilde x_i$'s relative to the center-of-mass. The function
$$
\mathcal{F} = -\tfrac{1}{2} k_n \sum_i \big[ l_i (\dot x_i \cdot \hat n_i)^2 + \tfrac{1}{12}\, l_i^3\, \dot\theta_i^2 \big] - \tfrac{1}{2} k_t \sum_i l_i (\dot x_i \cdot \hat t_i)^2
$$
known as the dissipation function, is that function whose derivatives WRT the $\dot q_i$'s provide the postulated frictional forces. With these in place, we can obtain $\ddot q$ from the $2+d$ Euler-Lagrange equations:
$$
\frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial \dot q_i}\right) = \frac{\partial \mathcal{F}}{\partial \dot q_i} + u
$$
with $u$ being the external forces and torques applied to the system. By applying $d-1$ torques $\tau_j$ in action-reaction pairs at the joints, $u_i = \tau_i - \tau_{i-1}$, the isolated nature of the dynamical system is preserved. Performing the differentiations, solving for $\ddot q$, and letting $x = \begin{pmatrix} q \\ \dot q \end{pmatrix}$ be the $(4+2d)$-dimensional state variable, finally gives the dynamics $\dot x = \begin{pmatrix} \dot q \\ \ddot q \end{pmatrix} = f(x, u)$.
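As an illustration of the force model, the friction acting on a single link follows directly from the definitions above; the drag coefficients here merely respect $k_n > k_t$ and the ratio $k_n/k_t = 25$ used in the experiments:

```python
import numpy as np

def link_friction(theta, xdot, l, k_n=25.0, k_t=1.0):
    """Viscous friction on one link: normal and tangential drag,
    with k_n > k_t > 0 as postulated in the text."""
    t_hat = np.array([np.cos(theta), np.sin(theta)])
    n_hat = np.array([-np.sin(theta), np.cos(theta)])
    return (-k_n * l * n_hat * float(xdot @ n_hat)
            - k_t * l * t_hat * float(xdot @ t_hat))
```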
3.2 Internal coordinates
The two coordinates specifying the position of the center-of-mass and the $d$ angles are defined relative to an external coordinate system, which the controller should not have access to. We make a coordinate transformation into internal coordinates, where only the $d-1$ relative angles $\{\bar\theta_j = \theta_{j+1} - \theta_j\}_{j=1}^{d-1}$ are given, and the location of the target is given relative to a coordinate system fixed on one of the links. This makes the learning isotropic and independent of a specific location on the plane. The collocation method allows us to perform this transformation directly on the vector cloud without having to differentiate it explicitly, as we would have had to using classical DDP. Note also that this transformation reduces the dimension of the state (one angle less), suggesting the possibility of further dimensionality reduction.
3.3 The reward function
The reward function we used was
$$
r(x, u) = -c_x \frac{\|x_{nose}\|^2}{\sqrt{\|x_{nose}\|^2 + 1}} - c_u \|u\|^2 \qquad (8)
$$
Where $x_{nose} = [x_1\ x_2]^T$ is the 2-vector from some designated point on the swimmer's body to the target (the origin in internal space), and $c_x$ and $c_u$ are positive constants. This reward is maximized when the nose is brought to rest on the target under a quadratic action-cost penalty. It should not be confused with the desired-state reward of classical optimal control, since values are specified only for 2 out of the $2d + 4$ coordinates. The functional form of the target-reward term is designed to be linear in $\|x_{nose}\|$ when far from the target and quadratic when close to it (Figure 2(b)). Because
(a) Swimmer
(b) Reward
Figure 2: (a) A 5-swimmer with the 'nose' point at its tip and a ring-shaped target. (b) The functional form of the planar reward component $r(x_{nose}) = -\|x_{nose}\|^2 / \sqrt{\|x_{nose}\|^2 + 1}$. This form translates into a steady swimming gait at large distances with a smooth braking and stopping at the goal.
of the differentiation in Eq. (5), the solution is independent of $V_0$, the constant part of the value. Therefore, in the linear regime of the reward function, the solution is independent of the distance from the target, and all the trajectories are quickly compelled to converge to a one-dimensional manifold in state-space which describes steady-state swimming (Figure 1(b)). Upon nearing the target, the swimmer must initiate a braking maneuver and bring the nose to a standstill over the target. For targets that are near the swimmer, the behaviour must also include various turns and jerks, quite different from steady-state swimming, which maneuver the nose into contact with the target. Our experience during interaction with the controller, as detailed below, leads us to believe that the behavioral variety that would be exhibited by a hypothetical exact optimal controller for this system is extremely large.
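A direct transcription of (8); the weights `c_x` and `c_u` are illustrative placeholders, since their values are not reported in the text:

```python
import numpy as np

def reward(x_nose, u, c_x=1.0, c_u=0.01):
    """Reward (8): linear in ||x_nose|| far from the target, quadratic
    near it, with a quadratic action-cost penalty."""
    d2 = float(x_nose @ x_nose)
    return -c_x * d2 / np.sqrt(d2 + 1.0) - c_u * float(u @ u)
```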
4 Results
In order to assess the controllers we constructed a real-time interaction package.³ By dragging the
target with a cursor, a user can interact with controlled swimmers of 3 to 10 links with a state dimension varying from 10 to 24, respectively. Even with controllers composed of a single trajectory,
the swimmers perform quite well, turning, tracking and braking on approach to the target.
All of the controllers in the package control swimmers with unit link lengths and unit masses. The
normal-to-tangential drag coefficient ratio was kn /kt = 25. The function F computes a single 4thR t+?t
order Runge-Kutta integration step of the continuous dynamics F (xk , uk ) = xk+ t
f (xk , uk )dt
with ?t = 0.05s . The receding horizon window was of 40 time-steps, or 2 seconds.
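The discrete map $F$ is then one standard 4th-order Runge-Kutta step:

```python
def rk4_step(f, x, u, dt=0.05):
    """Single 4th-order Runge-Kutta integration step of x' = f(x, u),
    giving the discrete dynamics F(x, u) used by the controllers."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```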
When the state doesn't gravitate to one of the basins of attraction around the trajectories, numerical divergence can occur. This effect can be initiated by the user by quickly moving the target to a 'surprising' location. Because nonlinear viscosity effects are not modeled and the local controllers
are also linear, exponentially diverging torques and angular velocities can be produced. When adding
as few as 20 additional trajectories, divergence is almost completely avoided.
Another claim which may be made is that there is no guarantee that the solutions obtained, even on
the trajectories, are in fact optimal. Because DDP is a local optimization method, it is bound to stop
in a local minimum. An extension of this claim is that even if the solutions are optimal, this has to
do with the swimmer domain itself, which might be inherently convex in some sense and therefore
an 'easy' problem.
While both divergence and local minima are serious issues, they can both be addressed by appealing
to our panoramic motivation in the biology. Real organisms cannot apply unbounded torque. By
hard-limiting the torque to large but finite values, non-divergence can be guaranteed.⁴ Similarly, local minima exist even in the motor behaviour of the most complex organisms, famously evidenced by Fosbury's reinvention of the high jump.
Regarding the easiness or difficulty of the swimmer problem: we have made the documented code available and hope that it might serve as a useful benchmark for other algorithms.
5 Conclusions
The significance of this work lies in its outlining of a new kind of tradeoff in nonlinear motor control
design. If biological realism is an accepted design goal, and physical and biological constraints taken
into account, then the expectations we have from our controllers can be more relaxed than those of
the control engineer. The unavoidable eventual failure of any specific biological organism makes
the design of truly robust controllers a futile endeavor, in effect putting more weight on the mode,
rather than the tail of the behavioral distribution. In return for this forfeiture of global guarantees,
we gain very high performance in a small but very dense sub-manifold of the state-space.
³Available at http://alice.nc.huji.ac.il/~tassa/
⁴We actually constrain angular velocities, since limiting torque would require a stiffer integrator; theoretical non-divergence is fully guaranteed by the viscous dissipation, which enforces a Lyapunov function on the entire system once torques are limited.
Since we make use of biologically grounded arguments, we briefly outline the possible implications
of this work to biological nervous systems. It is commonly acknowledged, due both to theoretical
arguments and empirical findings, that some form of dimensionality reduction must be at work in
neural control mechanisms. A common object in models which attempt to describe this reduction
is the motor primitive, a hypothesized atomic motor program which is combined with other such
programs in a small ?alphabet?, to produce complex behaviors in a given context. Our controllers
imply a different reduction: a set of complex prototypical motor programs, each of which is nearoptimal only in a small volume of the state-space, yet in that space describes the entire complexity of
the solution. Giving the simplest building blocks of the model such a high degree of task specificity
or context, would imply a very large number of these motor prototypes in a real nervous system, an
order of magnitude analogous, in our linguistic metaphor, to that of words and concepts.
References
[1] Remi Munos and Andrew W. Moore. Variable resolution discretization for high-accuracy solutions of optimal control problems. In International Joint Conference on Artificial Intelligence, pages 1348–1355, 1999.
[2] M. Stilman, C. G. Atkeson, J. J. Kuffner, and G. Zeglin. Dynamic programming in reduced dimensional spaces: Dynamic planning for robust biped locomotion. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA 2005), pages 2399–2404, 2005.
[3] Christopher G. Atkeson. Using local trajectory optimizers to speed up global optimization in dynamic programming. In NIPS, pages 663–670, 1993.
[4] C. G. Atkeson and J. Morimoto. Non-parametric representation of policies and value functions: A trajectory based approach. In Advances in Neural Information Processing Systems 15, 2003.
[5] P. Abbeel, A. Coates, M. Quigley, and A. Y. Ng. An application of reinforcement learning to aerobatic helicopter flight. In Advances in Neural Information Processing Systems 19, 2007.
[6] J. Morimoto and C. G. Atkeson. Minimax differential dynamic programming: An application to robust biped walking. In Advances in Neural Information Processing Systems 14, 2002.
[7] Emanuel Todorov and Wei-Wei Li. Optimal control methods suitable for biomechanical systems. In 25th Annual Int. Conf. IEEE Engineering in Medicine and Biology Society, 2003.
[8] R. Munos. Policy gradient in continuous time. Journal of Machine Learning Research, 7:771–791, 2006.
[9] J. Peters and S. Schaal. Reinforcement learning for parameterized motor primitives. In Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN 2006), 2006.
[10] Tom Erez and William D. Smart. Bipedal walking on rough terrain using manifold control. In IEEE/RSJ International Conference on Robots and Systems (IROS), 2007.
[11] A. Crespi and A. Ijspeert. AmphiBot II: An amphibious snake robot that crawls and swims using a central pattern generator. In Proceedings of the 9th International Conference on Climbing and Walking Robots (CLAWAR 2006), pages 19–27, 2006.
[12] D. Q. Mayne. A second order gradient method for determining optimal trajectories for non-linear discrete-time systems. International Journal of Control, 3:85–95, 1966.
[13] D. H. Jacobson and D. Q. Mayne. Differential Dynamic Programming. Elsevier, 1970.
[14] L.-Z. Liao and C. A. Shoemaker. Convergence in unconstrained discrete-time differential dynamic programming. IEEE Transactions on Automatic Control, 36(6):692–706, 1991.
[15] S. Yakowitz. Algorithms and computational techniques in differential dynamic programming. Control and Dynamic Systems: Advances in Theory and Applications, 31:75–91, 1989.
[16] L.-Z. Liao and C. A. Shoemaker. Advantages of differential dynamic programming over Newton's method for discrete-time optimal control problems. Technical Report 92-097, Cornell Theory Center, 1992.
[17] E. Todorov. Iterative local dynamic programming. Manuscript under review, available at www.cogsci.ucsd.edu/~todorov/papers/ildp.pdf, 2007.
[18] S. J. Julier and J. K. Uhlmann. A new extension of the Kalman filter to nonlinear systems. In Proceedings of AeroSense: The 11th Int. Symp. on Aerospace/Defence Sensing, Simulation and Controls, 1997.
[19] C. E. Garcia, D. M. Prett, and M. Morari. Model predictive control: theory and practice. Automatica, 25:335–348, 1989.
[20] M. Stolle and C. G. Atkeson. Policies based on trajectory libraries. In Proceedings of the International Conference on Robotics and Automation (ICRA 2006), 2006.
[21] R. Coulom. Reinforcement Learning Using Neural Networks, with Applications to Motor Control. PhD thesis, Institut National Polytechnique de Grenoble, 2002.
2,533 | 3,298 | Simulated Annealing: Rigorous finite-time guarantees
for optimization on continuous domains
Andrea Lecchini-Visintini
Department of Engineering
University of Leicester, UK
alv1@leicester.ac.uk
John Lygeros
Automatic Control Laboratory
ETH Zurich, Switzerland.
lygeros@control.ee.ethz.ch
Jan Maciejowski
Department of Engineering
University of Cambridge, UK
jmm@eng.cam.ac.uk
Abstract
Simulated annealing is a popular method for approaching the solution of a global
optimization problem. Existing results on its performance apply to discrete combinatorial optimization where the optimization variables can assume only a finite
set of possible values. We introduce a new general formulation of simulated annealing which allows one to guarantee finite-time performance in the optimization of functions of continuous variables. The results hold universally for any
optimization problem on a bounded domain and establish a connection between
simulated annealing and up-to-date theory of convergence of Markov chain Monte
Carlo methods on continuous domains. This work is inspired by the concept of
finite-time learning with known accuracy and confidence developed in statistical
learning theory.
Optimization is the general problem of finding a value of a vector of variables $\theta$ that maximizes (or minimizes) some scalar criterion $U(\theta)$. The set of all possible values of the vector $\theta$ is called the optimization domain. The elements of $\theta$ can be discrete or continuous variables. In the first case
the optimization domain is usually finite, such as in the well-known traveling salesman problem; in
the second case the optimization domain is a continuous set. An important example of a continuous
optimization domain is the set of 3-D configurations of a sequence of amino-acids in the problem of
finding the minimum energy folding of the corresponding protein [1].
In principle, any optimization problem on a finite domain can be solved by an exhaustive search.
However, this is often beyond computational capacity: the optimization domain of the traveling
salesman problem with 100 cities contains more than $10^{155}$ possible tours. An efficient algorithm
to solve the traveling salesman and many similar problems has not yet been found and such problems remain reliably solvable only in principle [2]. Statistical mechanics has inspired widely used
methods for finding good approximate solutions in hard discrete optimization problems which defy
efficient exact solutions [3, 4, 5, 6]. Here a key idea has been that of simulated annealing [3]: a
random search based on the Metropolis-Hastings algorithm, such that the distribution of the elements of the domain visited during the search converges to an equilibrium distribution concentrated
around the global optimizers. Convergence and finite-time performance of simulated annealing on
finite domains has been evaluated in many works, e.g. [7, 8, 9, 10].
On continuous domains, most popular optimization methods perform a local gradient-based search and in general converge to local optimizers, with the notable exception of convex criteria, where convergence to the unique global optimizer occurs [11]. Simulated annealing performs a
global search and can be easily implemented on continuous domains. Hence it can be considered
a powerful complement to local methods. In this paper, we introduce for the first time rigorous
guarantees on the finite-time performance of simulated annealing on continuous domains. We will
show that it is possible to derive simulated annealing algorithms which, with an arbitrarily high level
of confidence, find an approximate solution to the problem of optimizing a function of continuous
variables, within a specified tolerance to the global optimal solution after a known finite number of
steps. Rigorous guarantees on the finite-time performance of simulated annealing in the optimization of functions of continuous variables have never been obtained before; the only results available
state that simulated annealing converges to a global optimizer as the number of steps grows to infinity, e.g. [12, 13, 14, 15].
The background of our work is twofold. On the one hand, our notion of approximate solution to
a global optimization problem is inspired by the concept of finite-time learning with known accuracy
and confidence developed in statistical learning theory [16, 17]. We actually maintain an important
aspect of statistical learning theory which is that we do not introduce any particular assumption on
the optimization criterion, i.e. our results hold regardless of what U is. On the other hand, we ground
our results on the theory of convergence, with quantitative bounds on the distance to the target distribution, of the Metropolis-Hastings algorithm and Markov Chain Monte Carlo (MCMC) methods,
which has been one of the main achievements of recent research in statistics [18, 19, 20, 21].
In this paper, we will not develop any ready-to-use optimization algorithm. We will instead introduce a general formulation of the simulated annealing method which allows one to derive new
simulated annealing algorithms with rigorous finite-time guarantees on the basis of existing theory.
The Metropolis-Hastings algorithm and the general family of MCMC methods have many degrees
of freedom. The choice and comparison of specific algorithms goes beyond the scope of the paper.
The paper is organized in the following sections. In Simulated annealing we introduce the
method and fix the notation. In Convergence we recall the reasons why finite-time guarantees for
simulated annealing on continuous domains have not been obtained before. In Finite-time guarantees we present the main result of the paper. In Conclusions we state our findings and conclude the
paper.
1 Simulated annealing
The original formulation of simulated annealing was inspired by the analogy between the stochastic
evolution of the thermodynamic state of an annealing material towards the configurations of minimal
energy and the search for the global minimum of an optimization criterion [3]. In the procedure, the
optimization criterion plays the role of the energy and the state of the annealed material is simulated
by the evolution of the state of an inhomogeneous Markov chain. The state of the chain evolves
according to the Metropolis-Hastings algorithm in order to simulate the Boltzmann distribution of
thermodynamic equilibrium. The Boltzmann distribution is simulated for a decreasing sequence of
temperatures (?cooling?). The target distribution of the cooling procedure is the limiting Boltzmann
distribution, for the temperature that tends to zero, which takes non-zero values only on the set of
global minimizers [7].
The original formulation of the method was for a finite domain. However, simulated annealing can be generalized straightforwardly to a continuous domain because the Metropolis-Hastings
algorithm can be used with almost no differences on discrete and continuous domains The main
difference is that on a continuous domain the equilibrium distributions are specified by probability
densities. On a continuous domain, Markov transition kernels in which the distribution of the elements visited by the chain converges to an equilibrium distribution with the desired density can
be constructed using the Metropolis-Hastings algorithm and the general family of MCMC methods
[22].
We point out that Boltzmann distributions are not the only distributions which can be adopted as
equilibrium distributions in simulated annealing [7]. In this paper it is convenient for us to adopt a
different type of equilibrium distribution in place of Boltzmann distributions.
1.1 Our setting
The optimization criterion is $U : \Theta \to [0, 1]$, with $\Theta \subset \mathbb{R}^N$. The assumption that $U$ takes values in the interval $[0, 1]$ is a technical one. It does not imply any serious loss of generality. In general, any bounded optimization criterion can be scaled to take values in $[0, 1]$. We assume that the optimization task is to find a global maximizer; this can be done without loss of generality. We also assume that $\Theta$ is a bounded set.
We consider equilibrium distributions defined by probability density functions proportional to $[U(\theta) + \delta]^J$ where $J$ and $\delta$ are two strictly positive parameters. We use $\pi^{(J)}$ to denote an equilibrium distribution, i.e. $\pi^{(J)}(d\theta) \propto [U(\theta) + \delta]^J \lambda_{\text{Leb}}(d\theta)$ where $\lambda_{\text{Leb}}$ is the standard Lebesgue measure. Here, $J^{-1}$ plays the role of the temperature: if the function $U(\theta)$ plus $\delta$ is taken to a positive power $J$ then as $J$ increases (i.e. as $J^{-1}$ decreases) $[U(\theta) + \delta]^J$ becomes increasingly peaked around the global maximizers. The parameter $\delta$ is an offset which guarantees that the equilibrium densities are always strictly positive, even if $U$ takes zero values on some elements of the domain. The offset $\delta$ is chosen by the user and we show later that our results allow one to make an optimal selection of $\delta$. The zero-temperature distribution is the limiting distribution, for $J \to \infty$, which takes non-zero values only on the set of global maximizers. It is denoted by $\pi^{(\infty)}$.
In the generic formulation of the method, the Markov transition kernel of the $k$-th step of the inhomogeneous chain has equilibrium distribution $\pi^{(J_k)}$ where $\{J_k\}_{k=1,2,\ldots}$ is the 'cooling schedule'. The cooling schedule is a non-decreasing sequence of positive numbers according to which the equilibrium distributions become increasingly sharpened during the evolution of the chain. We use $\theta_k$ to denote the state of the chain and $P_{\theta_k}$ to denote its probability distribution. The distribution $P_{\theta_k}$ obviously depends on the initial condition $\theta_0$. However, in this work, we don't need to make this dependence explicit in the notation.
Remark 1: If, given an element θ in Θ, the value U(θ) can be computed directly, we say that U
is a deterministic criterion, e.g. the energy landscape in protein structure prediction [1]. In problems
involving random variables, the value U(θ) may be the expected value U(θ) = ∫ g(x, θ) p_x(x; θ) dx
of some function g which depends on both the optimization variable θ and on some random variable x which has probability density p_x(x; θ) (which may itself depend on θ). In such problems it
is usually not possible to compute U(θ) directly, either because evaluation of the integral requires
too much computation, or because no analytical expression for p_x(x; θ) is available. Typically one
must perform stochastic simulations in order to obtain samples of x for a given θ, hence obtain
sample values of g(x, θ), and thus construct a Monte Carlo estimate of U(θ). The Bayesian design
of clinical trials is an important application area where such expected-value criteria arise [23]. The
authors of this paper investigate the optimization of expected-value criteria motivated by problems
of aircraft routing [24]. In the particular case that p_x(x; θ) does not depend on θ, the optimization
task is often called 'empirical risk minimization', and is studied extensively in statistical learning
theory [16, 17]. The results of this paper apply in the same way to the optimization of both deterministic and expected-value criteria. The MCMC method developed by Müller [25, 26] allows one
to construct simulated annealing algorithms for the optimization of expected-value criteria. Müller
[25, 26] employs the same equilibrium distributions as those described in our setting; in his context
J is restricted to integer values.
2 Convergence
The rationale of simulated annealing is as follows: if the temperature is kept constant, say J_k = J,
then the distribution of the state of the chain P_{θ_k} tends to the equilibrium distribution π^(J); if J → ∞
then the equilibrium distribution π^(J) tends to the zero-temperature distribution π^(∞); as a result, if
the cooling schedule J_k tends to infinity, one obtains that P_{θ_k} 'follows' π^(J_k), that π^(J_k) tends
to π^(∞), and eventually that the distribution of the state of the chain P_{θ_k} tends to π^(∞). The theory
shows that, under conditions on the cooling schedule and the Markov transition kernels, the distribution of the state of the chain P_{θ_k} actually converges to the target zero-temperature distribution
π^(∞) as k → ∞ [12, 13, 14, 15]. Convergence to the zero-temperature distribution implies that
asymptotically the state of the chain eventually coincides with a global optimizer with probability
one.
The difficulty which must be overcome in order to obtain finite-step results on simulated annealing algorithms on a continuous domain is that usually, in an optimization problem defined over
continuous variables, the set of global optimizers has zero Lebesgue measure (e.g. a set of isolated
points). If the set of global optimizers has zero measure then the set of global optimizers has null
probability according to the equilibrium distributions π^(J) for any finite J and, as a consequence,
according to the distributions P_{θ_k} for any finite k. Put another way, the probability that the state of
the chain visits the set of global optimizers is constantly zero after any finite number of steps. Hence
the confidence of the fact that the solution provided by the algorithm in finite time coincides with a
global optimizer is also constantly zero. Notice that this is not the case for a finite domain, where
the set of global optimizers is of non-null measure with respect to the reference counting measure
[7, 8, 9, 10].
It is instructive to look at the issue also in terms of the rate of convergence to the target zero-temperature distribution. On a discrete domain, the distribution of the state of the chain at each
step and the zero-temperature distribution are both standard discrete distributions. It is then possible
to define a distance between them and study the rate of convergence of this distance to zero. This
analysis allows one to obtain results on the finite-time behavior of simulated annealing [7, 8]. On a
continuous domain and for a set of global optimizers of measure zero, the target zero-temperature
distribution π^(∞) ends up being a mixture of probability masses on the set of global optimizers. In
this situation, although the distribution of the state of the chain P_{θ_k} still converges asymptotically
to π^(∞), it is not possible to introduce a sensible distance between the two distributions, and a rate
of convergence to the target distribution cannot even be defined (weak convergence); see [12, Theorem 3.3]. This is the reason that until now there have been no guarantees on the performance of
simulated annealing on a continuous domain after a finite number of computations: by adopting the
zero-temperature distribution π^(∞) as the target distribution it is only possible to prove asymptotic
convergence in infinite time to a global optimizer.
Remark 2: The standard distance between two distributions, say π_1 and π_2, on a continuous support is the total variation norm ‖π_1 − π_2‖_TV = sup_A |π_1(A) − π_2(A)|, see e.g. [21]. In simulated
annealing on a continuous domain the distribution of the state of the chain P_{θ_k} is absolutely continuous with respect to the Lebesgue measure (i.e. λ_Leb(A) = 0 ⇒ P_{θ_k}(A) = 0), by construction,
for any finite k. Hence if the set of global optimizers has zero Lebesgue measure then it has zero
measure also according to P_{θ_k}. The set of global optimizers has however measure 1 according to
π^(∞). The distance ‖P_{θ_k} − π^(∞)‖_TV is then constantly 1 for any finite k.
It is also worth mentioning that if the set of global optimizers has zero measure then asymptotic convergence to the zero-temperature distribution π^(∞) can be proven only under the additional
assumptions of continuity and differentiability of U [12, 13, 14, 15].
3 Finite-time guarantees
In general, optimization algorithms for problems defined on continuous variables can only find approximate solutions in finite time [27]. Given an element θ of a continuous domain, how can we
assess how good it is as an approximate solution to an optimization problem? Here we introduce
the concept of approximate global optimizer to answer this question. The definition is given for
a maximization problem in a continuous but bounded domain. We use two parameters: the value
imprecision ε (greater than or equal to 0) and the residual domain α (between 0 and 1), which together determine the level of approximation. We say that θ is an approximate global optimizer of U
with value imprecision ε and residual domain α if the function U takes values strictly greater than
U(θ) + ε only on a subset of values of Θ no larger than an α portion of the optimization domain. The
formal definition is as follows.
Definition 1 Let U : Θ → R be an optimization criterion where Θ ⊂ R^N is bounded. Let λ_Leb
denote the standard Lebesgue measure. Let ε ≥ 0 and α ∈ [0, 1] be given numbers. Then θ is an
approximate global optimizer of U with value imprecision ε and residual domain α if

    λ_Leb{θ′ ∈ Θ : U(θ′) > U(θ) + ε} ≤ α λ_Leb(Θ).
In other words, the value U(θ) is within ε of a value which is greater than the values that U takes
on at least a 1 − α portion of the domain. The smaller ε and α are, the better is the approximation
of a true global optimizer. If both ε and α are equal to zero then U(θ) coincides with the essential
supremum of U.
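Because Definition 1 only involves the Lebesgue measure of the set where U exceeds U(θ) + ε, it can be checked approximately by uniform sampling of the domain. The sketch below is our own illustration (the box domain and sample size are assumptions, not from the text):

    import numpy as np

    def is_approx_optimizer(theta, U, eps, alpha, lo, hi, n=100_000, rng=np.random):
        """Monte Carlo check of Definition 1 on an assumed box Theta = [lo, hi]^N.

        Estimates the fraction of the domain where U exceeds U(theta) + eps;
        theta qualifies if that fraction is at most alpha (up to sampling error)."""
        dim = np.size(lo)
        samples = rng.uniform(lo, hi, size=(n, dim))
        frac = np.mean([U(s) > U(theta) + eps for s in samples])
        return frac <= alpha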
Our definition of approximate global optimizer carries an important property, which holds regardless of what the criterion U is: if ε and α have non-zero values then the set of approximate
global optimizers always has non-zero Lebesgue measure. It follows that the probability that the
chain visits the set of approximate global optimizers can be non-zero. Hence, it is sensible to study
the confidence of the fact that the solution found by simulated annealing in finite time is an approximate global optimizer.
Remark 3: The intuition that our notion of approximate global optimizer can be used to obtain
formal guarantees on the finite-time performance of optimization methods based on a stochastic
search of the domain is already apparent in the work of Vidyasagar [17, 28]. Vidyasagar [17, 28]
introduces a similar definition and obtains rigorous finite-time guarantees in the optimization of
expected-value criteria based on uniform independent sampling of the domain. Notably, the number
of independent samples required to guarantee some desired accuracy and confidence turns out to be
polynomial in the values of the desired imprecision, residual domain and confidence. Although the
method of Vidyasagar is not highly sophisticated, it has had considerable success in solving difficult
control system design applications [28, 29]. Its appeal stems from its rigorous finite-time guarantees
which exist without the need for any particular assumption on the optimization criterion.
Here we show that finite-time guarantees for simulated annealing can be obtained by selecting a
distribution π^(J) with a finite J as the target distribution, in place of the zero-temperature distribution
π^(∞). The fundamental result is the following theorem, which allows one to select δ and J in the
target distribution π^(J) in a rigorous way. It is important to stress that the result holds universally for
any optimization criterion U on a bounded domain. The only minor requirement is that U takes
values in [0, 1].
Theorem 1 Let U : Θ → [0, 1] be an optimization criterion where Θ ⊂ R^N is bounded. Let
J ≥ 1 and δ > 0 be given numbers. Let θ be a multivariate random variable with distribution
π^(J)(dθ) ∝ [U(θ) + δ]^J λ_Leb(dθ). Let α ∈ (0, 1] and ε ∈ [0, 1] be given numbers and define

    σ = 1 / [ 1 + ((1 + δ)/δ) ((1 + δ)/(ε + 1 + δ))^J ((1 + δ)/(α (ε + δ)) − 1) ].     (1)

Then the statement 'θ is an approximate global optimizer of U with value imprecision ε and residual
domain α' holds with probability at least σ.

Proof. See Appendix A.
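Since the bound (1) is explicit, σ can be evaluated directly, and the smallest J that achieves a desired confidence can be found by simple search. The helper below is our own sketch of the formula as reconstructed above (variable names are ours); it assumes ε > 0, so that σ is increasing in J:

    def confidence(J, delta, eps, alpha):
        """sigma from eq. (1): confidence that a sample from pi^(J) is an approximate
        global optimizer with value imprecision eps and residual domain alpha."""
        rho = (1 + delta) / (eps + 1 + delta)          # the rho of the proof, < 1 for eps > 0
        excess = (1 + delta) / (alpha * (eps + delta)) - 1
        return 1.0 / (1.0 + (1 + delta) / delta * rho**J * excess)

    def smallest_J(target_sigma, delta, eps, alpha):
        """Smallest integer J >= 1 with confidence >= target_sigma (requires eps > 0)."""
        J = 1
        while confidence(J, delta, eps, alpha) < target_sigma:
            J += 1
        return J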
The importance of the choice of a target distribution π^(J) with a finite J is that π^(J) is absolutely
continuous with respect to the Lebesgue measure. Hence, the distance ‖P_{θ_k} − π^(J)‖_TV between the
distribution of the state of the chain P_{θ_k} and the target distribution π^(J) is a meaningful quantity.
Convergence of the Metropolis-Hastings algorithm and MCMC methods in total variation norm
is a well studied problem. The theory provides simple conditions under which one derives upper
bounds on the distance to the target distribution which are known at each step of the chain and
decrease monotonically to zero as the number of steps of the chain grows. The theory has been
developed mainly for homogeneous chains [18, 19, 20, 21].
In the case of simulated annealing, the factor that enables us to employ these results is the absolute continuity of the target distribution π^(J) with respect to the Lebesgue measure. However, simulated annealing involves the simulation of inhomogeneous chains. In this respect, another important
fact is that the choice of a target distribution π^(J) with a finite J implies that the inhomogeneous
Markov chain can in fact be formed by a finite sequence of homogeneous chains (i.e. the cooling
schedule {J_k}_{k=1,2,...} can be chosen to be a sequence that takes only a finite set of values). In turn,
this allows one to apply the theory of homogeneous MCMC methods to study the convergence of
P_{θ_k} to π^(J) in total variation norm.
On a bounded domain, simple conditions on the 'proposal distribution' in the iteration of the simulated annealing algorithm allow one to obtain upper bounds on ‖P_{θ_k} − π^(J)‖_TV that decrease geometrically to zero as k → ∞, without the need for any additional assumption on U [18, 19, 20, 21].
It is then appropriate to introduce the following finite-time result.
Theorem 2 Let the notation and assumptions of Theorem 1 hold. Let θ_k, with distribution P_{θ_k}, be
the state of the inhomogeneous chain of a simulated annealing algorithm with target distribution
π^(J). Then the statement 'θ_k is an approximate global optimizer of U with value imprecision ε and
residual domain α' holds with probability at least σ − ‖P_{θ_k} − π^(J)‖_TV.

The proof of the theorem follows directly from the definition of the total variation norm.
It follows that if simulated annealing is implemented with an algorithm which converges in total
variation distance to a target distribution π^(J) with a finite J, then one can state with confidence
arbitrarily close to 1 that the solution found by the algorithm after the known appropriate finite
number of steps is an approximate global optimizer with the desired approximation level. For given
non-zero values of ε, α, the value of σ given by (1) can be made arbitrarily close to 1 by choice of
J, while the distance ‖P_{θ_k} − π^(J)‖_TV can be made arbitrarily small by taking the known sufficient
number of steps.
It can be shown that there exists the possibility of making an optimal choice of δ and J in the
target distribution π^(J). In fact, for given ε and α and a given value of J there exists an optimal
choice of δ which maximizes the value of σ given by (1). Hence, it is possible to obtain a desired σ
with the smallest possible J. The advantage of choosing the smallest J, consistent with the required
approximation and confidence, is that it will decrease the number of steps required to achieve the
desired reduction of ‖P_{θ_k} − π^(J)‖_TV.
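As a sketch of the overall procedure suggested by Theorems 1 and 2 (our own illustration, with an arbitrary piecewise-constant schedule and stage lengths; it reuses mh_step from the earlier sketch), a run with a finite target J could look as follows:

    import numpy as np

    def anneal(U, J_target, delta, lo, hi, steps_per_stage=2000, n_stages=5, rng=np.random):
        """Simulated annealing with a piecewise-constant cooling schedule ending at
        J_target, so the final homogeneous stage targets pi^(J_target)."""
        schedule = np.linspace(1.0, J_target, n_stages)   # finite set of J values
        theta = rng.uniform(lo, hi, size=np.size(lo))     # start anywhere in Theta
        for J in schedule:
            for _ in range(steps_per_stage):
                theta = mh_step(theta, U, J, delta, lo, hi, rng=rng)
        return theta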
4 Conclusions
We have introduced a new formulation of simulated annealing which admits rigorous finite-time
guarantees in the optimization of functions of continuous variables. First, we have introduced the
notion of approximate global optimizer. Then, we have shown that simulated annealing is guaranteed
to find approximate global optimizers, with the desired confidence and the desired level of accuracy,
in a known finite number of steps, if a proper choice of the target distribution is made and conditions
for convergence in total variation norm are met. The results hold for any optimization criterion on a
bounded domain with the only minor requirement that it takes values between 0 and 1.
In this framework, simulated annealing algorithms with rigorous finite-time guarantees can be
derived by studying the choice of the proposal distribution and of the cooling schedule, in the generic
iteration of simulated annealing, in order to ensure convergence to the target distribution in total
variation norm. To do this, existing theory of convergence of the Metropolis-Hastings algorithm and
MCMC methods on continuous domains can be used [18, 19, 20, 21].
Vidyasagar [17, 28] has introduced a similar definition of approximate global optimizer and has
shown that approximate optimizers with desired accuracy and confidence can be obtained with a
number of uniform independent samples of the domain which is polynomial in the accuracy and
confidence parameters. In general, algorithms developed with the MCMC methodology can be
expected to be equally or more efficient than uniform independent sampling.
Acknowledgments
Work supported by EPSRC, Grant EP/C014006/1, and by the European Commission under projects
HYGEIA FP6-NEST-4995 and iFly FP6-TREN-037180. We thank S. Brooks, M. Vidyasagar and
D. M. Wolpert for discussions and useful comments on the paper.
A Proof of Theorem 1

Let ᾱ ∈ (0, 1] and ρ ∈ (0, 1] be given numbers. Let Ū(θ) := U(θ) + δ. Let π̄ be a normalized
measure such that π̄(dθ) ∝ Ū(θ) λ_Leb(dθ). In the first part of the proof we find a lower bound on
the probability that θ belongs to the set {θ ∈ Θ : π̄{θ′ ∈ Θ : ρ Ū(θ′) > Ū(θ)} ≤ ᾱ}.

Let ŷ_ᾱ := inf{y : π̄{θ ∈ Θ : Ū(θ) ≤ y} ≥ 1 − ᾱ}. To start with, we show that the set
{θ ∈ Θ : π̄{θ′ ∈ Θ : ρ Ū(θ′) > Ū(θ)} ≤ ᾱ} coincides with {θ ∈ Θ : Ū(θ) ≥ ρ ŷ_ᾱ}. Notice
that the quantity π̄{θ ∈ Θ : Ū(θ) ≤ y} is a right-continuous non-decreasing function of y, because
it has the form of a distribution function (see e.g. [30, p. 162] and [17, Lemma 11.1]). Therefore we
have π̄{θ ∈ Θ : Ū(θ) ≤ ŷ_ᾱ} ≥ 1 − ᾱ and

    y ≥ ρ ŷ_ᾱ  ⇒  π̄{θ′ ∈ Θ : ρ Ū(θ′) ≤ y} ≥ 1 − ᾱ  ⇒  π̄{θ′ ∈ Θ : ρ Ū(θ′) > y} ≤ ᾱ.

Moreover,

    y < ρ ŷ_ᾱ  ⇒  π̄{θ′ ∈ Θ : ρ Ū(θ′) ≤ y} < 1 − ᾱ  ⇒  π̄{θ′ ∈ Θ : ρ Ū(θ′) > y} > ᾱ,

and taking the contrapositive one obtains

    π̄{θ′ ∈ Θ : ρ Ū(θ′) > y} ≤ ᾱ  ⇒  y ≥ ρ ŷ_ᾱ.

Therefore {θ ∈ Θ : Ū(θ) ≥ ρ ŷ_ᾱ} = {θ ∈ Θ : π̄{θ′ ∈ Θ : ρ Ū(θ′) > Ū(θ)} ≤ ᾱ}.

We now derive a lower bound on π^(J){θ ∈ Θ : Ū(θ) ≥ ρ ŷ_ᾱ}. Let us introduce the notation
A_ᾱ := {θ ∈ Θ : Ū(θ) < ŷ_ᾱ}, Ā_ᾱ := {θ ∈ Θ : Ū(θ) ≥ ŷ_ᾱ}, B_{ρ,ᾱ} := {θ ∈ Θ : Ū(θ) < ρ ŷ_ᾱ}
and B̄_{ρ,ᾱ} := {θ ∈ Θ : Ū(θ) ≥ ρ ŷ_ᾱ}. Notice that B_{ρ,ᾱ} ⊆ A_ᾱ and Ā_ᾱ ⊆ B̄_{ρ,ᾱ}. The quantity
π̄{θ ∈ Θ : Ū(θ) < y}, as a function of y, is the left-continuous version of π̄{θ ∈ Θ : Ū(θ) ≤ y}
[30, p. 162]. Hence, the definition of ŷ_ᾱ implies π̄(A_ᾱ) ≤ 1 − ᾱ and π̄(Ā_ᾱ) ≥ ᾱ. Notice that

    π̄(A_ᾱ) ≤ 1 − ᾱ  ⇒  δ λ_Leb(A_ᾱ) / ∫_Θ Ū(θ) λ_Leb(dθ) ≤ 1 − ᾱ,
    π̄(Ā_ᾱ) ≥ ᾱ      ⇒  (1 + δ) λ_Leb(Ā_ᾱ) / ∫_Θ Ū(θ) λ_Leb(dθ) ≥ ᾱ.

Hence, λ_Leb(Ā_ᾱ) > 0 and

    λ_Leb(A_ᾱ) / λ_Leb(Ā_ᾱ) ≤ ((1 − ᾱ)/ᾱ) ((1 + δ)/δ).

Notice that λ_Leb(Ā_ᾱ) > 0 implies λ_Leb(B̄_{ρ,ᾱ}) > 0. We obtain

    π^(J){θ ∈ Θ : Ū(θ) ≥ ρ ŷ_ᾱ}
      = ∫_{B̄_{ρ,ᾱ}} Ū(θ)^J λ_Leb(dθ) / [ ∫_{B_{ρ,ᾱ}} Ū(θ)^J λ_Leb(dθ) + ∫_{B̄_{ρ,ᾱ}} Ū(θ)^J λ_Leb(dθ) ]
      = 1 / [ 1 + ∫_{B_{ρ,ᾱ}} Ū(θ)^J λ_Leb(dθ) / ∫_{B̄_{ρ,ᾱ}} Ū(θ)^J λ_Leb(dθ) ]
      ≥ 1 / [ 1 + ρ^J ŷ_ᾱ^J λ_Leb(A_ᾱ) / ( ŷ_ᾱ^J λ_Leb(Ā_ᾱ) ) ]
      ≥ 1 / [ 1 + ρ^J ((1 − ᾱ)/ᾱ) ((1 + δ)/δ) ].

Since {θ ∈ Θ : Ū(θ) ≥ ρ ŷ_ᾱ} ⊆ {θ ∈ Θ : π̄{θ′ ∈ Θ : ρ Ū(θ′) > Ū(θ)} ≤ ᾱ}, the first part of
the proof is complete.

In the second part of the proof we show that the set {θ ∈ Θ : π̄{θ′ ∈ Θ : ρ Ū(θ′) >
Ū(θ)} ≤ ᾱ} is contained in the set of approximate global optimizers of U with value imprecision
ε̂ := (ρ^(−1) − 1)(1 + δ) and residual domain α̂ := ᾱ (1 + δ)/(ε̂ + δ). Hence, we show that {θ ∈ Θ : π̄{θ′ ∈
Θ : ρ Ū(θ′) > Ū(θ)} ≤ ᾱ} ⊆ {θ ∈ Θ : λ_Leb{θ′ ∈ Θ : U(θ′) > U(θ) + ε̂} ≤ α̂ λ_Leb(Θ)}. We
have

    U(θ′) > U(θ) + ε̂  ⇒  ρ Ū(θ′) > ρ [Ū(θ) + ε̂]  ⇒  ρ Ū(θ′) > Ū(θ),

where the last implication is proven by noticing that ρ [Ū(θ) + ε̂] ≥ Ū(θ) is equivalent to ρ ε̂ ≥ (1 − ρ) Ū(θ),
which holds because ρ ε̂ = (1 − ρ)(1 + δ) ≥ (1 − ρ) Ū(θ)
and U(θ) ∈ [0, 1]. Hence {θ′ ∈ Θ : ρ Ū(θ′) > Ū(θ)} ⊇ {θ′ ∈ Θ : U(θ′) > U(θ) + ε̂}.
Therefore π̄{θ′ ∈ Θ : ρ Ū(θ′) > Ū(θ)} ≤ ᾱ  ⇒  π̄{θ′ ∈ Θ : U(θ′) > U(θ) + ε̂} ≤ ᾱ. Let
Q_{θ,ε̂} := {θ′ ∈ Θ : U(θ′) > U(θ) + ε̂} and notice that

    π̄{θ′ ∈ Θ : U(θ′) > U(θ) + ε̂} = [ ∫_{Q_{θ,ε̂}} U(θ′) λ_Leb(dθ′) + δ λ_Leb(Q_{θ,ε̂}) ] / [ ∫_Θ U(θ′) λ_Leb(dθ′) + δ λ_Leb(Θ) ].

We obtain

    π̄{θ′ ∈ Θ : U(θ′) > U(θ) + ε̂} ≤ ᾱ  ⇒  ε̂ λ_Leb(Q_{θ,ε̂}) + δ λ_Leb(Q_{θ,ε̂}) ≤ ᾱ (1 + δ) λ_Leb(Θ)
      ⇒  λ_Leb{θ′ ∈ Θ : U(θ′) > U(θ) + ε̂} ≤ α̂ λ_Leb(Θ).

Hence we can conclude that

    π̄{θ′ ∈ Θ : ρ Ū(θ′) > Ū(θ)} ≤ ᾱ  ⇒  λ_Leb{θ′ ∈ Θ : U(θ′) > U(θ) + ε̂} ≤ α̂ λ_Leb(Θ),

and the second part of the proof is complete.

We have shown that, given ᾱ ∈ (0, 1], ρ ∈ (0, 1], ε̂ := (ρ^(−1) − 1)(1 + δ), α̂ := ᾱ (1 + δ)/(ε̂ + δ) and

    σ := 1 / [ 1 + ρ^J ((1 − ᾱ)/ᾱ) ((1 + δ)/δ) ]
       = 1 / [ 1 + ((1 + δ)/δ) ((1 + δ)/(ε̂ + 1 + δ))^J ((1 + δ)/(α̂ (ε̂ + δ)) − 1) ],

the statement 'θ is an approximate global optimizer of U with value imprecision ε̂ and residual
domain α̂' holds with probability at least σ. Notice that ε̂ ∈ [0, 1] and α̂ ∈ (0, 1] are linked through
a bijective relation to ρ ∈ [(1 + δ)/(2 + δ), 1] and ᾱ ∈ (0, (ε̂ + δ)/(1 + δ)]. The statement of the theorem is eventually
obtained by expressing σ as a function of the desired ε̂ = ε and α̂ = α.
References
[1] D. J. Wales. Energy Landscapes. Cambridge University Press, Cambridge, UK, 2003.
[2] D. Achlioptas, A. Naor, and Y. Peres. Rigorous location of phase transitions in hard optimization problems. Nature, 435:759–764, 2005.
[3] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by Simulated Annealing. Science, 220(4598):671–680, 1983.
[4] E. Bonomi and J. Lutton. The N-city travelling salesman problem: statistical mechanics and the Metropolis algorithm. SIAM Rev., 26(4):551–568, 1984.
[5] Y. Fu and P. W. Anderson. Application of statistical mechanics to NP-complete problems in combinatorial optimization. J. Phys. A: Math. Gen., 19(9):1605–1620, 1986.
[6] M. Mézard, G. Parisi, and R. Zecchina. Analytic and Algorithmic Solution of Random Satisfiability Problems. Science, 297:812–815, 2002.
[7] P. M. J. van Laarhoven and E. H. L. Aarts. Simulated Annealing: Theory and Applications. D. Reidel Publishing Company, Dordrecht, Holland, 1987.
[8] D. Mitra, F. Romeo, and A. Sangiovanni-Vincentelli. Convergence and finite-time behavior of simulated annealing. Adv. Appl. Prob., 18:747–771, 1986.
[9] B. Hajek. Cooling schedules for optimal annealing. Math. Oper. Res., 13:311–329, 1988.
[10] J. Hannig, E. K. P. Chong, and S. R. Kulkarni. Relative Frequencies of Generalized Simulated Annealing. Math. Oper. Res., 31(1):199–216, 2006.
[11] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK, 2004.
[12] H. Haario and E. Saksman. Simulated annealing process in general state space. Adv. Appl. Prob., 23:866–893, 1991.
[13] S. B. Gelfand and S. K. Mitter. Simulated Annealing Type Algorithms for Multivariate Optimization. Algorithmica, 6:419–436, 1991.
[14] C. Tsallis and D. A. Stariolo. Generalized simulated annealing. Physica A, 233:395–406, 1996.
[15] M. Locatelli. Simulated Annealing Algorithms for Continuous Global Optimization: Convergence Conditions. J. Optimiz. Theory App., 104(1):121–133, 2000.
[16] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, US, 1995.
[17] M. Vidyasagar. Learning and Generalization: With Applications to Neural Networks. Springer-Verlag, London, second edition, 2003.
[18] S. P. Meyn and R. L. Tweedie. Markov Chains and Stochastic Stability. Springer-Verlag, London, 1993.
[19] J. S. Rosenthal. Minorization Conditions and Convergence Rates for Markov Chain Monte Carlo. J. Am. Stat. Assoc., 90(430):558–566, 1995.
[20] K. L. Mengersen and R. L. Tweedie. Rates of convergence of the Hastings and Metropolis algorithms. Ann. Stat., 24(1):101–121, 1996.
[21] G. O. Roberts and J. S. Rosenthal. General state space Markov chains and MCMC algorithms. Prob. Surv., 1:20–71, 2004.
[22] C. P. Robert and G. Casella. Monte Carlo Statistical Methods. Springer-Verlag, New York, second edition, 2004.
[23] D. J. Spiegelhalter, K. R. Abrams, and J. P. Myles. Bayesian approaches to clinical trials and health-care evaluation. John Wiley & Sons, Chichester, UK, 2004.
[24] A. Lecchini-Visintini, W. Glover, J. Lygeros, and J. M. Maciejowski. Monte Carlo Optimization for Conflict Resolution in Air Traffic Control. IEEE Trans. Intell. Transp. Syst., 7(4):470–482, 2006.
[25] P. Müller. Simulation based optimal design. In J. O. Berger, J. M. Bernardo, A. P. Dawid, and A. F. M. Smith, editors, Bayesian Statistics 6: Proceedings of the Sixth Valencia International Meeting, pages 459–474. Oxford: Clarendon Press, 1999.
[26] P. Müller, B. Sansó, and M. De Iorio. Optimal Bayesian design by Inhomogeneous Markov Chain Simulation. J. Am. Stat. Assoc., 99(467):788–798, 2004.
[27] L. Blum, F. Cucker, M. Shub, and S. Smale. Complexity and Real Computation. Springer-Verlag, New York, 1998.
[28] M. Vidyasagar. Randomized algorithms for robust controller synthesis using statistical learning theory. Automatica, 37(10):1515–1528, 2001.
[29] R. Tempo, G. Calafiore, and F. Dabbene. Randomized Algorithms for Analysis and Control of Uncertain Systems. Springer-Verlag, London, 2005.
[30] B. V. Gnedenko. Theory of Probability. Chelsea, New York, fourth edition, 1968.
A neural network implementing optimal state
estimation based on dynamic spike train decoding
Omer Bobrowski1 , Ron Meir1 , Shy Shoham2 and Yonina C. Eldar1
Department of Electrical Engineering1 and Biomedical Engineering2
Technion, Haifa 32000, Israel
{bober@tx},{rmeir@ee},{sshoham@bm},{yonina@ee}.technion.ac.il
Abstract
It is becoming increasingly evident that organisms acting in uncertain dynamical
environments often employ exact or approximate Bayesian statistical calculations
in order to continuously estimate the environmental state, integrate information
from multiple sensory modalities, form predictions and choose actions. What is
less clear is how these putative computations are implemented by cortical neural
networks. An additional level of complexity is introduced because these networks
observe the world through spike trains received from primary sensory afferents,
rather than directly. A recent line of research has described mechanisms by which
such computations can be implemented using a network of neurons whose activity directly represents a probability distribution across the possible 'world states'.
Much of this work, however, uses various approximations, which severely restrict the domain of applicability of these implementations. Here we make use of
rigorous mathematical results from the theory of continuous time point process
filtering, and show how optimal real-time state estimation and prediction may be
implemented in a general setting using linear neural networks. We demonstrate
the applicability of the approach with several examples, and relate the required
network properties to the statistical nature of the environment, thereby quantifying the compatibility of a given network with its environment.
1 Introduction
A key requirement of biological or artificial agents acting in a random dynamical environment is
estimating the state of the environment based on noisy observations. While it is becoming clear that
organisms employ some form of Bayesian inference, it is not yet clear how the required computations may be implemented in networks of biological neurons. We consider the problem of a system,
receiving multiple state-dependent observations (possibly arising from different sensory modalities)
in the form of spike trains, and construct a neural network which, based on these noisy observations,
is able to optimally estimate the probability distribution of the hidden world state.
The present work continues a line of research attempting to provide a probabilistic Bayesian framework for optimal dynamical state estimation by biological neural networks. In this framework, first
formulated by Rao (e.g., [8, 9]), the time-varying probability distributions are represented in the
neurons' activity patterns, while the network's connectivity structure and intrinsic dynamics are
responsible for performing the required computation. Rao's networks use linear dynamics and discrete time to approximately compute the log-posterior distributions from noisy continuous inputs
(rather than actual spike trains). More recently, Beck and Pouget [1] introduced networks in which
the neurons directly represent and compute the posterior probabilities (rather than their logarithms)
from discrete-time approximate firing rate inputs, using non-linear mechanisms such as multiplicative interactions and divisive normalization. Another relevant line of work is that of Brown and
colleagues as well as others (e.g., [4, 11, 13]), where approximations of optimal dynamical estimators from spike-train based inputs are calculated, however without addressing the question of neural
implementation.
Our approach is formulated within a continuous time point process framework, circumventing many
of the difficulties encountered in previous work based on discrete time approximations and input
smoothing. Moreover, using tools from the theory of continuous time point process filtering (e.g.,
[3]), we are able to show that a linear system suffices to yield the exact posterior distribution for
the state. The key element in the approach is switching from posterior distributions to a new set
of functions which are simply non-normalized forms of the posterior distribution. While posterior
distributions generally obey non-linear differential equations, these non-normalized functions obey
a linear set of equations, known as the Zakai equations [15]. Intriguingly, these linear equations
contain the full information required to reconstruct the optimal posterior distribution! The linearity
of the exact solution provides many advantages of interpretation and analysis, not least of which is
an exact solution, which illustrates the clear distinction between observation-dependent and independent contributions. Such a separation leads to a characterization of the system performance in
terms of prior knowledge and real-time observations. Since the input observations appear directly
as spike trains, no temporal information is lost. The present formulation allows us to consider inputs arising from several sensory modalities, and to determine the contribution of each modality to
the posterior estimate, thereby extending to the temporal domain previous work on optimal multimodal integration, which was mostly restricted to the static case. Inherent differences between the
modalities, related to temporal delays and different shapes of tuning curves can be incorporated and
quantified within the formalism.
In a historical context we note that a mathematically rigorous approach to point process based filtering was developed during the early 1970s following the seminal work of Wonham [14] for finite
state Markov processes observed in Gaussian noise, and of Kushner [7] and Zakai [15] for diffusion
processes. One of the first papers presenting a mathematically rigorous approach to nonlinear filtering in continuous time based on point process observations was [12], where the exact nonlinear
differential equations for the posterior distributions are derived. The presentation in Section 4 summarizes the main mathematical results initiated by the latter line of research, adapted mainly from
[3], and serves as a convenient starting point for many possible extensions.
2 A neural network as an optimal filter
Consider a dynamic environment characterized at time t by a state X_t, belonging to a set of N
states, namely X_t ∈ {s_1, s_2, ..., s_N}. We assume the state dynamics is Markovian with generator matrix Q. The matrix Q, [Q]_ij = q_ij, is defined [5] by requiring that for small values of h,
Pr[X_{t+h} = s_i | X_t = s_i] = 1 + q_ii h + o(h) and Pr[X_{t+h} = s_j | X_t = s_i] = q_ij h + o(h) for j ≠ i.
The normalization requirement is that Σ_j q_ij = 0. This matrix controls the process's infinitesimal
progress according to π̇(t) = π(t) Q, where π_i(t) = Pr[X_t = s_i].
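As a concrete illustration of a generator matrix (ours, not from the paper), the snippet below builds a nearest-neighbour random-walk Q with zero row sums and samples a path of X_t using exponential holding times; it assumes every state has a strictly positive exit rate:

    import numpy as np

    def random_walk_Q(N, rate=1.0):
        """Generator of a birth-death chain on s_1..s_N: jump to a neighbour at `rate`."""
        Q = np.zeros((N, N))
        for i in range(N):
            if i > 0:
                Q[i, i - 1] = rate
            if i < N - 1:
                Q[i, i + 1] = rate
            Q[i, i] = -Q[i].sum()          # rows sum to zero
        return Q

    def simulate_chain(Q, x0, T, rng=np.random):
        """Sample a path of X_t on [0, T]: holding time Exp(-q_ii), then jump
        to state j with probability q_ij / (-q_ii)."""
        t, x, path = 0.0, x0, [(0.0, x0)]
        while True:
            t += rng.exponential(1.0 / -Q[x, x])
            if t >= T:
                return path
            probs = np.maximum(Q[x], 0) / -Q[x, x]
            x = rng.choice(len(Q), p=probs)
            path.append((t, x))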
The state X_t is not directly observable, but is only sensed through a set of M random state-dependent
observation point processes {N_t^(k)}_{k=1}^M. We take each point process N_t^(k) to represent the spiking
activity of the k-th sensory cell, and assume these processes to be doubly stochastic Poisson counting
processes¹ with state-dependent rates λ_k(X_t). These processes are assumed to be independent,
given the current state X_t. The objective of state estimation (a.k.a. nonlinear filtering) is to obtain a
differential equation for the posterior probabilities

    p_i(t) := Pr[ X_t = s_i | N_[0,t]^(1), ..., N_[0,t]^(M) ],     (1)

where N_[0,t]^(k) := {N_s^(k)}_{0≤s≤t}. In the sequel we denote Y_0^t = {N_[0,t]^(1), ..., N_[0,t]^(M)}, and refer the
reader to Section 4 for precise mathematical definitions.
We interpret the rate λ_k as providing the tuning curve for the k-th sensory input. In other words,
the k-th sensory cell responds with strength λ_k(s_i) when the input state is X_t = s_i. The required
differential equations for p_i(t) are considerably simplified, with no loss of information [3], by considering a set of non-normalized 'probability functions' ρ_i(t), such that p_i(t) = ρ_i(t) / Σ_{j=1}^N ρ_j(t).

¹ Implying that the rate function itself is a random process.
Based on the theory presented in Section 4 we obtain

    ρ̇_i(t) = Σ_{j=1}^N Q_ji ρ_j(t) + Σ_{k=1}^M (λ_k(s_i) − 1) [ Σ_n δ(t − t_n^k) − 1 ] ρ_i(t),     (2)

where {t_n^k} denote the spiking times of the k-th sensory cell. This equation can be written in vector
form by defining

    Λ_k = diag(λ_k(s_1) − 1, λ_k(s_2) − 1, ..., λ_k(s_N) − 1) ;  Λ = Σ_{k=1}^M Λ_k,     (3)

and ρ = (ρ_1, ..., ρ_N), leading to

    ρ̇(t) = (Q − Λ)^T ρ(t) + Σ_{k=1}^M Λ_k Σ_n δ(t − t_n^k) ρ(t).     (4)
Equations (2) and (4) can be interpreted as the activity of a linear neural network, where ρ_i(t)
represents the firing rate of neuron i at time t, and the matrix (Q − Λ)^T represents the synaptic
weights (including self-weights); see Figure 1 for a graphical display of the network. Assuming
that the tuning functions λ_k are unimodal, decreasing in all directions from some maximal value
(e.g., Gaussian or truncated cosine functions), we observe from (2) that the impact of an input spike
at time t is strongest on cell i for which λ_k(s_i) is maximal, and decreases significantly for cells
j for which s_j is 'far' from s_i. This effect can be modelled using excitatory/inhibitory connections, where neurons representing similar states excite each other, while neurons corresponding to
very different states inhibit each other (e.g., [2]). This issue will be elaborated on in future work.

[Figure 1: A graphical depiction of the network implementing optimal filtering of M spike train inputs.]

Several observations are in place regarding (4). (i) The solution of (4) provides the optimal posterior state estimator
given the spike train observations, i.e., no approximation is involved. (ii) The equations are linear even though the equations
obeyed by the posterior probabilities p_i(t) are nonlinear. (iii)
The temporal evolution breaks up neatly into an observation-independent term, which can be conceived of as implementing
a Bayesian dynamic prior, and an observation-dependent term,
which contributes each time a spike occurs. Note that a similar structure was observed recently in [1]. (iv) The observation
process affects the posterior estimate through two terms. First,
input processes with strong spiking activity affect the activity
more strongly. Second, the k-th input affects most strongly the
components of ρ(t) corresponding to states with large values
of the tuning curve λ_k(s_i). (v) At this point we assume that the
matrix Q is known. In a more general setting, one can expect
Q to be learned on a slower time scale, through interaction
with the environment. We leave this as a topic for future work.
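A straightforward time-discretized implementation of (4) — an Euler step for the inter-spike linear dynamics plus the multiplicative update at spike times derived below in (7) — is sketched here as our own illustration; the step size and the renormalization (harmless, since only ratios of the ρ_i matter) are our choices:

    import numpy as np

    def run_filter(Q, lam, spikes, T, dt=1e-3):
        """Euler integration of eq. (4); a sketch, not the authors' code.

        lam[k, i] = lambda_k(s_i); spikes = sorted list of (time, k) events."""
        N = Q.shape[0]
        A = (Q - np.diag((lam - 1).sum(axis=0))).T   # drift matrix (Q - Lambda)^T
        rho = np.full(N, 1.0 / N)                    # flat initial condition
        s, posterior = 0, []
        for t in np.arange(0.0, T, dt):
            rho = rho + dt * (A @ rho)               # inter-spike linear dynamics
            while s < len(spikes) and spikes[s][0] <= t + dt:
                k = spikes[s][1]
                rho = rho * lam[k]                   # spike update, cf. eq. (7)
                s += 1
            rho /= rho.sum()                         # renormalize for numerical stability
            posterior.append(rho.copy())
        return np.array(posterior)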
Multi-modal inputs A multi-modal scenario may be envisaged as one in which a subset of the sensory inputs arises from one modality (e.g., visual) while the remaining inputs arise from a different
sensory modality (e.g., auditory). These modalities may differ in the shapes of their receptive fields,
their response latencies, etc. The framework developed above is sufficiently general to deal with
any number of modalities, but consider for simplicity just two modalities, denoted by V and A. It is
straightforward to extend the derivation of (4), leading to

    ρ̇(t) = (Q − Λ^v − Λ^a)^T ρ(t) + { Σ_{k=1}^{M_v} Λ_k^v Σ_n δ(t − t_n^{v,k}) + Σ_{k=1}^{M_a} Λ_k^a Σ_n δ(t − t_n^{a,k}) } ρ(t).     (5)
Prediction The framework can easily be extended to prediction, defined as the problem of calculating the future posterior distribution p_i^h(t) = Pr[X_{t+h} = s_i | Y_0^t]. It is easy to show that the
non-normalized probabilities ρ^h(t) can be calculated using the vector differential equation

    ρ̇^h(t) = (Q − Λ̃)^T ρ^h(t) + Σ_{k=1}^M Λ̃_k Σ_n δ(t − t_n^k) ρ^h(t),     (6)

with the initial condition ρ^h(0) = e^{hQ^T} ρ(0), and where Λ̃_k = e^{hQ^T} Λ_k e^{−hQ^T}. Interestingly, the
equations obtained are identical to (4), except that the system parameters are modified.
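The modified parameters can be precomputed once for a given horizon h; the sketch below (ours, using scipy's matrix exponential, with the transforms as reconstructed above) returns the initial-condition map and the transformed tuning matrices:

    import numpy as np
    from scipy.linalg import expm

    def prediction_params(Q, lam, h):
        """E maps rho(0) to rho^h(0) = e^{h Q^T} rho(0); Lks are the matrices
        Lambda_k~ = e^{h Q^T} Lambda_k e^{-h Q^T} used in eq. (6)."""
        E = expm(h * Q.T)
        E_inv = expm(-h * Q.T)
        Lks = [E @ np.diag(lam[k] - 1) @ E_inv for k in range(lam.shape[0])]
        return E, Lks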
Simplified equation When the tuning curves of the sensory cells are uniformly distributed Gaussians (e.g., spatial receptive fields), namely λ_k(x) = λ_max exp(−(x − k∆x)²/2σ²), it can be shown
[13] that for small enough ∆x and a large number of sensory cells, Σ_{k=1}^M λ_k(x) ≈ λ̄ for all x, implying that Λ = Σ_k Λ_k ≈ (λ̄ − M) I. Therefore the matrix Λ has no effect on the solution of (4),
except for an exponential attenuation that is applied to all the cells simultaneously. Therefore, in
cases where the number of sensory cells is large, Λ can be omitted from (4). This means that between spike arrivals, the system behaves solely according to the a-priori knowledge about the world,
and when a spike arrives, this information is reshaped according to the firing cell's tuning curve.
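For concreteness, the doubly stochastic Poisson inputs used in such settings can be generated by a Bernoulli (time-discretized) approximation; the sketch below (ours; the sizes and the discretization are assumptions) builds the Gaussian tuning curves and samples spikes along a given state path:

    import numpy as np

    def gaussian_tuning(positions, centers, lam_max=25.0, sigma=2.0):
        """lam[k, i] = lam_max * exp(-(x_i - c_k)^2 / (2 sigma^2))."""
        d = positions[None, :] - centers[:, None]
        return lam_max * np.exp(-d**2 / (2 * sigma**2))

    def sample_spikes(state_path, lam, T, dt=1e-3, rng=np.random):
        """Bernoulli approximation of Poisson firing: cell k spikes in [t, t+dt)
        with probability lambda_k(X_t) * dt; state_path(t) returns a state index."""
        spikes = []
        for t in np.arange(0.0, T, dt):
            i = state_path(t)
            for k in range(lam.shape[0]):
                if rng.uniform() < lam[k, i] * dt:
                    spikes.append((t, k))
        return spikes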
3 Theoretical Implications and Applications
Viewing (4) we note that between spike arrivals, the input has no effect on the system. Therefore,
the inter-arrival dynamics is simply ρ̇(t) = (Q − Λ)^T ρ(t). Defining t_n as the n-th arrival time of a
spike from any one of the sensors, the solution in the interval (t_n, t_{n+1}) is

    ρ(t) = e^{(t − t_n)(Q − Λ)^T} ρ(t_n).

When a new spike arrives from the k-th sensory neuron at time t_n, the system is modified within an
infinitesimal window of time as

    ρ_i(t_n^+) = ρ_i(t_n^−) + ρ_i(t_n^−)(λ_k(s_i) − 1) = ρ_i(t_n^−) λ_k(s_i).     (7)

Thus, at the exact time of a spike arrival from the k-th sensory cell, the vector ρ is reshaped according
to the tuning curve of the input cell that fired this spike. Assuming n spikes occurred before time t,
we can derive an explicit solution to (4), given by

    ρ(t) = e^{(t − t_n)(Q − Λ)^T} ∏_{i=1}^n (I + Λ_{k(t_i)}) e^{(t_i − t_{i−1})(Q − Λ)^T} ρ(0),     (8)

where k(t_i) is the index of the cell that fired at t_i, I is the identity matrix, and we assumed initial
conditions ρ(0) at t_0 = 0.
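Equation (8) also suggests an exact, event-driven implementation: propagate ρ with a matrix exponential between spikes and rescale by the firing cell's tuning curve at each spike, since ((I + Λ_k)ρ)_i = λ_k(s_i) ρ_i. A sketch (ours, using scipy; the renormalization is for numerical stability only):

    import numpy as np
    from scipy.linalg import expm

    def exact_filter(Q, lam, spikes, t_end, rho0):
        """Event-driven evaluation of eq. (8); spikes = sorted (time, k) pairs."""
        A = (Q - np.diag((lam - 1).sum(axis=0))).T
        rho, t_prev = rho0.copy(), 0.0
        for t_i, k in spikes:
            rho = expm((t_i - t_prev) * A) @ rho   # inter-spike propagation
            rho = rho * lam[k]                     # (I + Lambda_k) rho
            rho /= rho.sum()
            t_prev = t_i
        return expm((t_end - t_prev) * A) @ rho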
3.1 Demonstrations
We demonstrate the operation of the system on several synthetic examples. First consider a small
object moving back and forth on a line, jumping between a set of discrete states, and being observed by a retina with M sensory cells. Each world state s_i describes the particle's position,
and each sensory cell k generates a Poisson spike train with rate λ_k(X_t), taken to be a Gaussian
λ_max exp(−(x − x_k)²/2σ²). Figure 2(a) displays the motion of the particle for a specific choice of
matrix Q, and 2(b) presents the spiking activity of 10 position-sensitive sensory cells. Finally, Figure
2(c) demonstrates the tracking ability of the system, where the width of the gray trace corresponds
to the prediction confidence. Note that the system is able to distinguish between 25 different states
rather well with only 10 sensory cells.
In order to enrich the system's estimation capabilities, we can include additional parameters within
the state of the world. Considering the previous example, we create a larger set of states: s̃_ij =
(s_i, d_j), where d_j denotes the current movement direction (in this case d_1 = up, d_2 = down). We add
a population of sensory cells that respond differently to different movement directions. This lends
further robustness to the state estimation. As can be seen in Figure 2(d)-(f), when for some reason the
input of the sensory cells is blocked (and the sensory cells fire spontaneously), the system estimates
a movement that continues in the same direction. When the blockade is removed, the system is re-synchronized with the input. It can be seen that even during periods where sensory input is absent,
the general trend is well predicted, even though the estimated uncertainty is increased.
By expanding the state space it is also possible for the system to track multiple objects simultaneously. In figure 2(g)-(i) we present tracking of two simultaneously moving objects. This is done
simply by creating a new state space, s_ij = (s_i^1, s_j^2), where s^k denotes the state (location) of the
k-th object.
[Figure 2 comprises nine panels: object trajectories (a, d, g), input spiking activity (b, e, h), and posterior probability evolution (c, f, i).]

Figure 2: Tracking the motion of an object in 1D. (a) The object's trajectory. (b) Spiking activity
of 10 sensory cells. (c) Decoded position estimation with confidence interval. Each of the 10
sensory cells has a Gaussian tuning curve of width σ = 2 and maximal firing rate λ_max = 25. (d)-(f)
Tracking based on position and direction information. The red bar marks the time when the input
was blocked, and the green bar marks the time when the blockade was removed. Here we used 10
place-cells and 4 direction-cells (marked in red). (g)-(i) Tracking of two objects simultaneously. The
network activity in (i) represents Pr[X_t^1 = s_i ∨ X_t^2 = s_i | Y_0^t].
3.2 Behavior Characterization
The solution of the filtering equations (4) depends on two processes, namely the recurrent dynamics
due to the first term, and the sensory input arising from the second term. Recall that the connectivity
matrix Q is essentially the generator matrix of the state transition process, and as such, incorporates
prior knowledge about the world dynamics. The second term, consisting of the sensory input, contributes to the state estimator update every time a spike occurs. Thus, a major question relates to
the interplay between the a-priori knowledge embedded in the network through Q and the incoming sensory input. In particular, an important question relates to tailoring the system parameters
(e.g., the tuning curves λ_k) to the properties of the external world. As a simple characterization
of the generator matrix Q, we consider the diagonal and non-diagonal terms. The diagonal term
q_ii is related to the average time spent in state i through E[T_i] = −1/q_ii [5], and thus we define

    τ(Q) = −( q_11^(−1) + ... + q_NN^(−1) ) / N,

as a measure of the transition frequency of the process, where
small values of τ correspond to a rapidly changing process. A second relevant measure relates to
the regularity of the transitions between the states. To quantify this, consider a state i, and define a
probability vector q_i consisting of the N − 1 elements {Q_ij}, j ≠ i, normalized so that the sum
of the elements is 1. The entropy of q_i is a measure of the state transition irregularity from state i,
and we define H(Q) as the average of this entropy over all states. In summary, we lump the main
properties of Q into τ(Q), related to the rapidity of the process, and H(Q), measuring the transition
regularity. Clearly, these variables are but one heuristic choice for characterizing the Markov process dynamics, but they will enable us to relate the 'world dynamics' to the system behavior. The
sensory input influence on the system is controlled by the tuning curves. To simplify the analysis we
assume uniformly placed Gaussian tuning curves, λ_k(x) = λ_max exp(−(x − k∆x)²/2σ²), which
can be characterized by two parameters - the maximum value λ_max and the width σ. Note, however,
that our model does not require any special constraints on the tuning curves.
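Both summary statistics are simple functions of Q; the following sketch (our own illustration) computes them:

    import numpy as np

    def tau(Q):
        """Mean expected holding time: -(1/q_11 + ... + 1/q_NN) / N."""
        return -np.mean(1.0 / np.diag(Q))

    def H(Q):
        """Average entropy (in nats) of the normalized off-diagonal rows of Q."""
        ents = []
        for i in range(len(Q)):
            q = np.delete(Q[i], i)
            p = q / q.sum()
            p = p[p > 0]                  # convention: 0 log 0 = 0
            ents.append(-(p * np.log(p)).sum())
        return np.mean(ents)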
Figure 3 examines the system performance under different world setups. We measure the performance using the L1 error of the maximum a-posteriori (MAP) estimator built from the posterior
distribution generated by the system. The MAP estimator is obtained by selecting the cell with the
highest firing activity ρ_i(t), is optimal under the present setting (leading to the minimal probability
of error), and can be easily implemented in a neural network by a Winner-Take-All circuit. The
choice of the L1 error is justified in this case since the states {s_i} represent locations on a line,
thereby endowing the state space with a distance measure. In figure 3(a) we can see that as τ(Q)
increases, the error diminishes, an expected result, since slower world dynamics are easier to analyze. The effect of H(Q) is opposite - lower entropy implies higher confidence in the next state.
Therefore we can see that the error increases with H(Q) (fig. 3(b)). The last issue we examine
relates to the behavior of the system when an incorrect Q matrix is used (i.e., the world model is
incorrect). It is clear from figure 3(c) that for low values of M (the number of sensory cells), using
the wrong Q matrix increases the error level significantly. However as the value of M increases, the
differences are reduced. This phenomenon is expected, since the more observations are available
about the world, the less weight need be assigned to the a-priori knowledge.
[Figure 3 comprises three panels: (a) Effect of state rapidity (L1 error vs. τ(Q)); (b) Effect of transition entropy (L1 error vs. H(Q)); (c) Effect of misspecification (L1 error vs. M, comparing a correct model, a wrong Q-matrix with the same τ(Q), and a wrong Q-matrix with a different τ(Q)).]

Figure 3: State estimation error for different world dynamics and model misspecification. For (a)
and (b) M = 17, N = 17, σ = 3, λ_max = 50, and for (c) N = 25, σ = 3, λ_max = 50.
In figure 4 we examine the effect of the tuning curve parameters on the system's performance. Given
a fixed number of input cells, if the tuning curves are too narrow (fig. 4(a), top), they will not cover
the entire state space. On the other hand, if the tuning curves are too wide (fig. 4(a), bottom) the cell's
response is very similar for all states. Therefore we get an error function that has a local minimum
(fig. 4(b)). It remains for future work to determine what is the optimal value of σ for a given model.
The effect of different values of λ_max is obvious - higher values of λ_max lead to more spikes per
sensory cell, which increases the system's accuracy. Clearly, under physiological conditions, where
high firing rates are energetically costly, we would expect a tradeoff between accuracy and energy
expenditure.
[Figure 4 comprises two panels: (a) tuning curves λ_k(x) of low, medium, and high width σ; (b) L1 error as a function of log(σ) for λ_max = 50, 25, and 10.]

Figure 4: The effect of the tuning curve parameters on performance.
4 Mathematical Framework and Derivations
We summarize the main mathematical results related to point process filtering, adapted mainly from
[3]. Consider a finite-state continuous-time Markov process X_t ∈ {s_1, s_2, ..., s_N} with a generator matrix Q that is being observed via a set of (doubly stochastic) Poisson processes with state-dependent rate functions λ_k(X_t), k = 1, ..., M.
Consider first a single point process observation N_0^t = {N_s}_{0≤s≤t}. We denote the joint probability
law for the state and observation process by P_1. The objective is to derive a differential equation for
the posterior probabilities (1). This is the classical nonlinear filtering problem from systems theory
(e.g. [6]). More generally, the problem can be phrased as computing E_1[f(X_t)|N_0^t], where, in the
case of (1), f is a vector function, with components f_i(x) = [x = s_i].
We outline the derivation required to obtain such an equation, using a method referred to as
change of measure (e.g., [3]). The basic idea is to replace the computationally hard evaluation
of E_1[f(X_t)|N_0^t] by a tractable computation based on a simple probability law. Consider two
probability spaces (Ω, F, P_0) and (Ω, F, P_1) that differ only in their probability measures. P_1
is said to be absolutely continuous with respect to P_0 (denoted by P_1 ≪ P_0), if for all A ∈ F,
P_0(A) = 0 ⇒ P_1(A) = 0. Assuming P_1 ≪ P_0, it can be proved that there exists a random variable
L(ω), ω ∈ Ω, such that for all A ∈ F,

    P_1(A) = E_0[1_A L] = ∫_A L(ω) dP_0(ω),     (9)

where E_0 denotes the expectation with respect to P_0. The random variable L is called the Radon-Nikodym derivative of P_1 with respect to P_0, and is denoted by L(ω) = dP_1(ω)/dP_0(ω).
Consider two continuous-time random processes, X_t and N_t, that have different interpretations under
the different probability measures P_0 and P_1:

    P_0:  X_t is a finite-state Markov process, X_t ∈ {s_1, s_2, ..., s_N};
          N_t is a Poisson process with a constant rate of 1, independent of X_t.     (10)

    P_1:  X_t is a finite-state Markov process, X_t ∈ {s_1, s_2, ..., s_N};
          N_t is a doubly-stochastic Poisson process with rate function λ(X_t).     (11)
The following avatar of Bayes' formula (eq. 3.5 in chap. 6 of [3]) supplies a way to calculate the
conditional expectation E_1[f(X_t)|N_0^t] based on P_1 in terms of an expectation w.r.t. P_0:

    E_1[f(X_t)|N_0^t] = E_0[L_t f(X_t)|N_0^t] / E_0[L_t|N_0^t],     (12)

where L_t = dP_{1,t}/dP_{0,t}, and P_{0,t} and P_{1,t} are the restrictions of P_0 and P_1, respectively, to the
sigma-algebra generated by {N_0^t, X_0^∞}. We refer the reader to [3] for precise definitions.
Using (1) and (12) we have

    p_i(t) = E_1[f_i(X_t)|N_0^t] = E_0[L_t f_i(X_t)|N_0^t] / E_0[L_t|N_0^t].     (13)

Since the denominator is independent of i, it can be regarded as a normalization factor. Thus,
defining ρ_i(t) := E_0[L_t f_i(X_t)|N_0^t], it follows that p_i(t) = ρ_i(t) / Σ_{j=1}^N ρ_j(t).
Based on the above derivation, one can show ([3], chap. 6.4) that {ρ_i(t)} obey the stochastic differential equation (SDE)

    dρ_i(t) = Σ_{j=1}^N Q_ji ρ_j(t) dt + (λ(s_i) − 1) ρ_i(t) (dN_t − dt).     (14)

An SDE of the form dρ(t) = a(t) dt + b(t) dN_t should be interpreted as follows. If at time t no
jump occurred in the counting process N_t, then ρ(t + dt) − ρ(t) ≈ a(t) dt, where dt denotes an
infinitesimal time interval. If a jump occurred at time t, then ρ(t + dt) − ρ(t) ≈ a(t) dt + b(t). Since
the jump locations are random, ρ(t) is a stochastic process, hence the term SDE.
Now, this derivation can be generalized to the case where there are M observation processes
N_t^(1), N_t^(2), ..., N_t^(M) with different rate functions λ_1(X_t), λ_2(X_t), ..., λ_M(X_t). In this case the
differential equation for the non-normalized posterior probabilities is

    dρ_i(t) = Σ_{j=1}^N Q_ji ρ_j(t) dt + Σ_{k=1}^M (λ_k(s_i) − 1) ρ_i(t) (dN_t^(k) − dt).     (15)

Recalling that N_t^(k) is a counting process, namely dN_t^(k)/dt = Σ_n δ(t − t_n^k), we obtain (2), where
t_n^k is the arrival time of the n-th event in the k-th observation process.
5 Discussion
In this work we have introduced a linear recurrent neural network model capable of exactly implementing Bayesian state estimation and prediction from input spike trains in real time. The framework
is mathematically rigorous and requires few assumptions, is naturally formulated in continuous time,
and is based directly on spike train inputs, thereby sacrificing no temporal resolution. The setup is
ideally suited to the integration of several sensory modalities, and retains its optimality in this setting
as well. The linearity of the system renders an analytic solution possible, and a full characterization
in terms of a-priori knowledge and online sensory input. This framework sets the stage for many
possible extensions and applications, of which we mention several examples. (i) It is important
to find a natural mapping between the current abstract neural model and more standard biological neural network models. One possible approach was mentioned in Section 2, but other options
are possible and should be pursued. Additionally, the implementation of the estimation network
(namely, the variables ?i (t)) using realistic spiking neurons is still open. (ii) At this point the matrix
Q in (4) is assumed to be known. Combining approaches to learning Q and adapting the tuning
curves ?k in real time will lend further plausibility and robustness to the system. (iii) The present
framework, based on doubly stochastic Poisson processes, can be extended to more general point
processes, using the filtering framework described in [10]. (iv) Currently, each world state is represented by a single neuron (a grandmother cell). This is clearly a non-robust representation, and it
would be worthwhile to develop more distributed and robust representations. Finally, the problem
of experimental verification of the framework is a crucial step in future work.
Acknowledgments The authors are grateful to Rami Atar for his helpful advice on nonlinear filtering.
References
[1] J.M. Beck and A. Pouget. Exact inferences in a neural implementation of a hidden Markov model. Neural Comput, 19(5):1344–1361, 2007.
[2] R. Ben-Yishai, R.L. Bar-Or, and H. Sompolinsky. Theory of orientation tuning in visual cortex. Proc Natl Acad Sci U S A, 92(9):3844–3848, Apr 1995.
[3] P. Brémaud. Point Processes and Queues: Martingale Dynamics. Springer, New York, 1981.
[4] U.T. Eden, L.M. Frank, V. Solo, and E.N. Brown. Dynamic analysis of neural encoding by point process adaptive filtering. Neural Computation, 16:971–998, 2004.
[5] G.R. Grimmett and D.R. Stirzaker. Probability and Random Processes. Oxford University Press, third edition, 2001.
[6] A.H. Jazwinski. Stochastic Processes and Filtering Theory. Academic Press, 1970.
[7] H.J. Kushner. Dynamical equations for optimal nonlinear filtering. J. Differential Equations, 3:179–190, 1967.
[8] R.P.N. Rao. Bayesian computation in recurrent neural circuits. Neural Comput, 16(1):1–38, 2004.
[9] R.P.N. Rao. Neural models of Bayesian belief propagation. In K. Doya, S. Ishii, A. Pouget, and R.P.N. Rao, editors, Bayesian Brain, chapter 11. MIT Press, 2006.
[10] A. Segall, M. Davis, and T. Kailath. Nonlinear filtering with counting observations. IEEE Trans. Information Theory, 21(2):143–149, 1975.
[11] S. Shoham, L.M. Paninski, M.R. Fellows, N.G. Hatsopoulos, J.P. Donoghue, and R.A. Norman. Statistical encoding model for a primary motor cortical brain-machine interface. IEEE Trans Biomed Eng., 52(7):1312–1322, 2005.
[12] D.L. Snyder. Filtering and detection for doubly stochastic Poisson processes. IEEE Transactions on Information Theory, IT-18:91–102, 1972.
[13] N. Twum-Danso and R. Brockett. Trajectory estimation from place cell data. Neural Netw, 14(6-7):835–844, 2001.
[14] W.M. Wonham. Some applications of stochastic differential equations to optimal nonlinear filtering. J. SIAM Control, 2(3):347–369, 1965.
[15] M. Zakai. On the optimal filtering of diffusion processes. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 11:230–243, 1969.
2,535 | 33 |
LEARNING BY STATE RECURRENCE DETECTION
Bruce E. Rosen, James M. Goodwin†, and Jacques J. Vidal
University of California, Los Angeles, Ca. 90024
ABSTRACT
This research investigates a new technique for unsupervised learning of nonlinear
control problems. The approach is applied both to Michie and Chambers BOXES
algorithm and to Barto, Sutton and Anderson's extension, the ASE/ACE system, and
has significantly improved the convergence rate of stochastically based learning
automata.
Recurrence learning is a new nonlinear reward-penalty algorithm. It exploits
information found during learning trials to reinforce decisions resulting in the
recurrence of nonfailing states. Recurrence learning applies positive reinforcement
during the exploration of the search space, whereas in the BOXES or ASE algorithms,
only negative weight reinforcement is applied, and then only on failure. Simulation
results show that the added information from recurrence learning increases the learning
rate.
Our empirical results show that recurrence learning is faster than both basic failure
driven learning and failure prediction methods. Although recurrence learning has only
been tested in failure driven experiments, there are goal directed learning applications
where detection of recurring oscillations may provide useful information that reduces
the learning time by applying negative, instead of positive reinforcement.
Detection of cycles provides a heuristic to improve the balance between evidence
gathering and goal directed search.
INTRODUCflON
This research investigates a new technique for unsupervised learning of nonlinear control problems with delayed feedback. Our approach is compared both to Michie and Chambers' BOXES algorithm [1] and to the extension by Barto et al., the ASE (Adaptive Search Element) and their ASE/ACE (Adaptive Critic Element) system [2], and shows an improved learning time for stochastically based learning automata in failure-driven tasks.
We consider adaptively controlling the behavior of a system which passes
through a sequence of states due to its internal dynamics (which are not assumed to be
known a priori) and due to choices of actions made in visited states. Such an adaptive
controller is often referred to as a learning automaton. The decisions can be
deterministic or can be made according to a stochastic rule. A learning automaton has
to discover which action is best in each circumstance by producing actions and
observing the resulting information.
This paper was motivated by the previous work of Barto, et al. to investigate
neuronlike adaptive elements that affect and learn from their environment. We were
inspired by their current work and the recent attention to neural networks and
connectionist systems, and have chosen to use the cart-pole control problem [2] to enable a comparison of our results with theirs.
† Permanent address: California State University, Stanislaus; Turlock, California.
© American Institute of Physics 1988
THE CART-POLE PROBLEM
In their work on the cart-pole problem, Barto, Sutton and Anderson considered a
learning system composed of an automaton interacting with an environment. The
problem requires the automaton to balance a pole acting as an inverted pendulum hinged
on a moveable cart. The cart travels left or right along a bounded one dimensional track;
the pole may swing to the left or right about a pivot attached to the cart. The automaton
must learn to keep the pole balanced on the cart, and to keep the cart within the bounds
of the track. The parameters of the cart/pole system are the cart position and velocity,
and the pole angle and angular velocity. The only actions available to the automaton are
the applications of a fixed impulsive force to the cart in either right or left direction; one
of these actions must be taken.
This balancing is an extremely difficult problem if there is no a priori knowledge
of the system dynamics, if these dynamics change with time, or if there is no
preexisting controller that can be imitated (e.g. Widrow and Smith's [3] ADALINE
controller). We assumed no a priori knowledge of the dynamics nor any preexisting
controller and anticipate that the system will be able to deal with any changing
dynamics.
Numerical simulations of the cart-pole solution via recurrence learning show
substantial improvement over the results of Barto et al., and of Michie and Chambers,
as is shown in figure 1. The algorithms used, and the results shown in figure 1, will
be discussed in detail below.
[Figure 1: plot of time until failure vs. trial number for each algorithm.]
Figure 1: Performance of the ASE, ASE/ACE, Constant Recurrence (H1) and Short Recurrence (H2) algorithms.
THE GENERAL PROBLEM: ASSIGNMENT OF CREDIT
The cart-pole problem is one of a class of problems known as "credit assignment" [4], and in particular temporal credit assignment. The recurrence learning
algorithm is an approach to the general temporal credit assignment problem. It is
characterized by seeking to improve learning by making decisions about early actions.
The goal is to find actions responsible for improved or degraded perfonnance at a much
later time.
An example is the bucket brigade algorithm [5]. This is designed to assign credit to
rules in the system according to their overall usefulness in attaining their goals. This is
done by adjusting the strength value (weight) of each rule. The problem is of
modifying these strengths is to permit rules activated early in the sequence to result in
successful actions later.
Samuel considered the credit assignment problem for his checkers-playing program [6]. He noted that it is easy enough to credit the rules that combine to produce a
triple jump at some point in the game; it is much harder to decide which rules active
earlier were responsible for changes that made the later jump possible.
State recurrence learning assigns a strength to an individual rule or action and
modifies that action's strength (while the system accumulates experience) on the basis
of the action's overall usefulness in the situations in which it has been invoked. In this
it follows the bucket brigade paradigm of Holland.
PREVIOUS WORK
The problems of learning to control dynamical systems have been studied in the
past by Widrow and Smith [3], Michie and Chambers [1], Barto, Sutton, and Anderson [2], and Connell [7]. Although different approaches have been taken and have achieved
varying degrees of success, each investigator used the cart/pole problem as the basis for
empirically measuring how well their algorithms work.
Michie and Chambers [1] built BOXES, a program that learned to balance a pole on
a cart. The BOXES algorithm chose an action that had the highest average time until
failure. After 600 trials (a trial is a run ending in eventual failure or by some time limit
expiration), the program was able to balance the pole for 72,000 time steps. Figure 2a
describes the BOXES learning algorithm. States are penalized (after a system failure)
according to recency. Active states immediately preceding a system failure are
punished most.
Barto, Sutton and Anderson [2] used two neuronlike adaptive elements to solve the
control problem. Their ASE/ACE algorithm chose the action with the highest
probability of keeping the pole balanced in the region, and was able to balance the pole
for over 60,000 time steps before completion of the 100th trial.
Figure 2a and 2b: The BOXES and ASE/ACE (Associative Search Element - Adaptive Critic Element) algorithms
Figure 2a shows the BOXES (and ASE) learning algorithm paradigm. When the
automaton enters a failure state (C), all states that it has traversed (shaded rectangles)
are punished, although state B is punished more than state A. (Failure states are those
at the edges of the diagram.) Figure 2b describes the ASE/ACE learning algorithm. If
a system failure occurs before a state's expected failure time, the state is penalized. If a
system failure occurs after its expected failure time, the state is rewarded. State A is
penalized because a failure occurred at B sooner than expected. State A's expected
failure time is the time for the automaton to traverse from state A to failure point C.
When leaving state A, the weights are updated if the new state's expected failure time
differs from that of state A.
Anderson [8] used a connectionist system to learn to balance the pole. Unlike the previous experiments, the system did not provide well-chosen states a priori. On the
average, 10,000 trials were necessary to learn to balance the pole for 7000 time steps.
Connell and Utgoff [7] developed an approach that did not depend on partitioning
the state space into discrete regions. They used Shepard's function [9,10] to interpolate
the degree of desirability of a cart-pole state. The system learned the control task after
16 trials. However, their system used a knowledge representation that had a priori
information about the system.
OTHER RELATED WORK
Klopf [11] proposed a more neurological class of differential learning mechanisms that correlates earlier changes of inputs with later changes of outputs. The adaptation formula used multiplies the change in outputs by the weighted sum of the absolute value of the t previous input weights (Δwj), the t previous differences in inputs (Δxj), and the t previous time coefficients (cj).
Sutton's temporal differences (TD) [12] approach is one of a class of adaptive
prediction methods. Elements of this class use the sum of previously predicted output
values multiplied by the gradient and an exponentially decaying coefficient to modify
the weights. Barto and Sutton [13] used temporal differences as the underlying learning
procedure for classical conditioning.
THE RECURRENCE LEARNING METHOD
DEFINITIONS
A state is the set of values (or ranges) of parameters sufficient to specify the
instantaneous condition of the system.
The input decoder groups the environmental states into equivalence classes:
elements of one class have identical system responses. Every environmental input is
mapped into one of n input states. (All further references to "states" assumes that the
input values fall into the discrete ranges determined by the decoder, unless otherwise
specified. )
States returned to after visiting one or more alternate states recur.
An action causes the modification of system parameters, which may change the
system state. However, no change of state need occur, since the altered parameter
values may be decoded within the same ranges.
A weight, w(t), is associated with each action for each state, with the probability
of an allowed action dependent on the current value of its weight.
A rule determines which of the allowable actions is taken. The rule is not
deterministic. It chooses an action stochastically, based on the weights.
Weight changes, ~w(t), are made to reduce the likelihood of choosing an action
which will cause an eventual failure. These changes are made based on the idea that the
previous action of an element, when presented with input x(t), had some influence in
causing a similar pattern to occur again. Thus, weight changes are made to increase the
likelihood that an element produces the same action f(t) when patterns similar to x(t)
occur in the future.
For example, consider the classic problem of balancing a pole on a moving cart.
The state is specified by the positions and velocities of both the cart and the pole. The
allowable actions are fixed velocity increments to the right or to the left, and the rule
determines which action to take, based on the current weights.
THE ALGORITHM
The recurrence learning algorithm presented here is a nonlinear reward-penalty method [14]. Empirical results show that it is successful for stationary environments. In
contrast to other methods, it also may be applicable to nonstationary environments.
Our efforts have been to develop algorithms that reward decision choices that lead the
controller/environment to quasi-stable cycles that avoid failure (such as limit cycles,
converging oscillations and absorbing points).
Our technique exploits recurrence information obtained during learning trials.
The system is rewarded upon return to a previous state, however weight changes are
only permitted when a state transition occurs. If the system returns to a state, it has
avoided failure. A recurring state is rewarded. A sequence of recurring states can be
viewed as evidence for a (possibly unstable) cycle. The algorithm forms temporal
"cause and effect" associations.
To optimize performance, dynamic search techniques must balance between
choosing a search path with known solution costs, and exploring new areas of the
search space to find better or cheaper solutions. This is known as the two armed bandit
problem l5 , i.e. given a two handed slot machine with one arm's observed reward
probabilities higher than the other, one should not exclude playing with the arm with
the lesser payoff. Like the ASE/ACE system, recurrence learning learns while
searching, in contrast to the BOXES and ASE algorithms which learn only upon
failure.
RANGE DECODING
In our work, as in Barto and others, the real valued input parameters are analyzed
as members of ranges. This reduces computing resource demands. Only a limited
number of ranges are allowed for each parameter. It is possible for these ranges to
overlap, although this aspect of range decoding is not discussed in this paper, and the
ranges were considered nonoverlapping. When the parameter value falls into one of the
ranges that range is active. The specification of a state consists of one of the active
ranges for each of the parameters. If the ranges do not overlap, then the set of
parameter values specify one unique state; otherwise the set of parameter values may
specify several states. Thus, the parameter values at any time determine one or several
active states Si from the set of n possible states.
The value of each environmental parameter falls into one of a number of ranges,
which may be different for different parameters. A state is specified by the active range
for each parameter.
The set of input parameter values is decoded into one (or more) of n ranges Si, 0 ≤ i ≤ n. For this problem, boolean values are used to describe the activity level of
a state Si. The activity value of a state is 1 if the state is active, or 0 if it is inactive.
ACTION DECISIONS
Our model is the same as that of the BOXES and ASE/ACE systems, where only
one input (and state) is active at any given time. All states were nonoverlapping and
mutually exclusive, although there was no reason to preclude them from overlapping
other than for consistency with the two previous models. In the ASE/ACE system and
in ours as well, the output decision rule for the controller is based on the weighted sum
of its inputs plus some stochastic noise. The action (output) decision of the controller
is either +1 or −1, as given by:

y(t) = f( Σi wi(t) xi(t) + noise )    (1)

where

f(z) = { +1 if z ≥ 0;  −1 if z < 0 }    (2)
and noise is a real randomly (Gaussian) distributed value with some mean μ and variance σ². An output, f(z), for the cart/pole controller is interpreted as a push to the left if f(z) = −1 or to the right if f(z) = +1.
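For concreteness, equations (1) and (2) amount to only a few lines of code. In the Python sketch below, the noise mean and variance are placeholder values, not parameters reported in the paper.

import numpy as np

def decide(w, x, mu=0.0, sigma=0.01, rng=np.random.default_rng()):
    # Stochastic push decision, eqs. (1)-(2): +1 pushes right, -1 pushes left.
    # w : per-state weights; x : binary state-activity vector (one active state).
    z = float(np.dot(w, x)) + rng.normal(mu, sigma)
    return 1 if z >= 0 else -1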
RECURRENCE LEARNING
The goal of the recurrence learning algorithm is to avoid failure by moving toward
states that are part of cycles if such states exist, or quasi-stable oscillations if they
don't. This concept can be compared to juggling. As long as all the balls are in the air,
the juggler is judged a success and rewarded. No consideration is given to whether the
balls are thrown high or low, left or right; the controller, like the juggler, tries for the
most stable cycles. Optimum performance is not demanded from recurrence learning.
Two heuristics have been devised. The fundamental basis of each of them is to
reward a state for being repeatedly visited (or repeatedly activated). The first heuristic
is to reward a state when it is revisited, as part of a cycle in which no failure had
occurred. The second heuristic augments the first by giving more reward to states
which participate in shorter cycles. These heuristics are discussed below in detail.
HEURISTIC H1: If a state has been visited more than once during one trial,
reward it by reinforcing its weight.
RATIONALE
This heuristic assumes that states that are visited more than once have been part of
a cycle in which no failure had occurred. The action taken in the previous visit is
assumed to have had some influence on the recurrence. By reinforcing a weight upon
state revisitation, it is assumed to increase the likelihood that the cycle will occur again.
No assumptions are made as to whether other states were responsible for the cycle.
RESTRICfION
An action may not immediately cause the environment to change to a different
state. There may be some delay before a transition, since small changes of parameters
may be decoded into the same input ranges, and hence the same state. This inertia is
incorporated into the heuristics. When the same state appears twice in succession, its
weight is not reinforced, since that would assume that the action, rather than inertia,
directly caused the state's immediate recurrence.
THE RECURRENCE EQUATIONS
The recurrence learning equations stem in part from the weight alteration formula
used in the ASE system. The weight of a state is a sum of its previous weight, and the
product of the learning rate (α), the reward (r), and the state's eligibility (e):

wi(t+1) = wi(t) + α r(t) ei(t),    r(t) ∈ {−1, 0}    (3)
The eligibility index ei(t) is an exponentially decaying trace function:

ei(t+1) = β ei(t) + (1 − β) yi(t) xi(t)    (4)

where 0 ≤ β ≤ 1, xi ∈ {0,1}, and yi ∈ {−1,1}. The output value yi is the last output decision, and β determines the decay rate.
The reward function is:
r(t) = { −1  when the system fails at time t;  0  otherwise }    (5)
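A minimal sketch of this failure-driven update follows, assuming the decaying-trace form of eq. (4) as reconstructed above; the learning-rate and decay constants are arbitrary illustrative values.

import numpy as np

def ase_step(w, e, x, y, failed, alpha=0.5, beta=0.9):
    # One time-step of the basic ASE rule, eqs. (3)-(5).
    # w, e : weight and eligibility vectors over states
    # x    : binary state-activity vector; y : last action in {-1, +1}
    r = -1.0 if failed else 0.0          # reward, eq. (5)
    w = w + alpha * r * e                # weight update, eq. (3)
    e = beta * e + (1.0 - beta) * y * x  # decaying eligibility trace, eq. (4)
    return w, e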
REINFORCEMENT OF CYCLES
Equations (1) through (5) describe the basic ASE system. Our algorithm extends
the weight updating procedure as follows:

wi(t+1) = wi(t) + α r(t) ei(t) + α2 r2(t) e2,i(t)    (6)
The term αr(t)ei(t) is the same as in (3), providing failure reinforcement. The term α2r2(t)e2,i(t) provides reinforcement for success. When state i is eligible (by virtue of xi > 0), there is a weight change by the amount α2 multiplied by the reward value, r2(t), and the current eligibility e2,i(t). For simplicity, the reward value, r2(t), may be taken to be some positive constant, although it need not be; any environmental feedback yielding a reinforcement value as a function of time could be used instead. The second eligibility function e2,i(t) yields one of three constant values for H1: −β2, 0, or β2, according to formula (7) below:
e2,i(t) = { 0    if t − ti,last = 1 or ti,last = 0;
            β2 xi(t) y(ti,last)    otherwise }    (7)

where ti,last is the last time that state was active. If a state has not previously been active (i.e. xi(t) = 0 for all t) then ti,last = 0. As the formula shows, e2,i(t) = 0 if the state has not been previously visited or if no state transition occurred in the last time step; otherwise, e2,i(t) = β2 xi(t) y(ti,last).
The direction (increase or decrease) of the weight change due to the final term in (6) is that of the last action taken, y(ti,last).
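The success term of eq. (6) with the H1 eligibility of eq. (7) can then be sketched as follows; r2 and the constants alpha2, beta2 are illustrative choices, and t_last / y_last are bookkeeping structures assumed for the example.

def recurrence_step_h1(w, i, t, t_last, y_last, x, alpha2=0.2, beta2=0.5, r2=1.0):
    # Constant-recurrence reinforcement for state i at time t (eqs. 6-7).
    # t_last[i] : last activation time of state i (0 if never active)
    # y_last[i] : action taken on the previous visit to state i
    prev = t_last.get(i, 0)
    if prev == 0 or t - prev == 1:
        e2 = 0.0                          # first visit, or no state transition
    else:
        e2 = beta2 * x[i] * y_last[i]     # eq. (7)
    w[i] += alpha2 * r2 * e2              # success term of eq. (6)
    return w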
Heuristic H1 is called constant recurrence learning because the eligibility function
is designed to reinforce any cycle.
HEURISTIC H2: Reward a short cycle more than a longer one.
Heuristic H2 is called short recurrence learning because the eligibility function is
designed to reinforce shorter cycle more than longer cycles.
REINFORCEMENT OF SHORTER CYCLES
The basis of the second heuristic is the conjecture that short cycles converge more
easily to absorbing points than long ones, and that long cycles diverge more easily than
shorter ones, although any cycle can "grow" or diverge to a larger cycle. The
following extension to the our basic heuristic is proposed.
The formula for the recurrence eligibility function is:

e2,i(t) = { 0    if t − ti,last = 1 or ti,last = 0;
            (β2 / (β2 + t − ti,last)) xi(t) y(ti,last)    otherwise }    (8)
The current eligibility function e2,i(t) is similar to the previous failure eligibility function in (7); however, e2,i(t) reinforces shorter cycles more, because the eligibility decays with time. The value returned from e2,i(t) is inversely proportional to the period of the cycle from ti,last to t. H2 reinforces converging oscillations; the term α2r2(t)e2,i(t) in (6) ensures weight reinforcement for returning to an already visited state.
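Heuristic H2 changes only the eligibility term; a sketch of eq. (8) under the same assumptions as above:

def e2_h2(t, t_last_i, x_i, y_last_i, beta2=0.5):
    # Short-recurrence eligibility, eq. (8): shorter cycles earn more credit.
    if t_last_i == 0 or t - t_last_i == 1:
        return 0.0
    return (beta2 / (beta2 + t - t_last_i)) * x_i * y_last_i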
Figure 3a and 3b: The Constant Recurrence and Short Recurrence algorithms
Figure 3a shows the Constant Recurrence algorithm (H1). A state is rewarded when it is reactivated by a transition from another state. In the example below, state A is rewarded by a constant regardless of whether the cycle traversed states B or C. Figure 3b describes the Short Recurrence algorithm (H2). A state is rewarded according to the difference between the current time and its last activation time. Small differences are rewarded more than large differences. In the example below, state A is rewarded more
when the cycle (through state C) traverses the states shown by the dark heavy line
rather than when the cycle (through state B) traverses the lighter line, since state A
recurs sooner when traversing the darker line.
SIMULATION RESULTS
We simulated four algorithms: ASE, ASE/ACE and the two recurrence
algorithms. Each experiment consisted of ten runs of the cart-pole balancing task, each
consisting of 100 trials. Each trial lasted for 500,000 time steps or until the cart-pole
system failed (i.e. the pole fell or the cart went beyond the track boundaries). In an
effort to conserve cpu time, simulations were also terminated when the system achieved
two consecutive trials each lasting for over 500,000 time steps; all remaining trials were
assumed to also last 500,000 time steps. This assumption was reasonable: the resulting
weight space causes the controller to become deterministic regardless of the influence of
stochastic noise. Because of the long time required to run simulations, no attempts were
made to optimize parameters of the algorithm.
As in Barto [2], each trial began with the cart centered and the pole upright. No
assumptions were made as to the state space configuration, the desirability of the initial
states, or the continuity of the state space.
The first experiment consisted of failure and recurrence reward learning. The
ASE failure learning runs averaged 1578 time steps until failure after 100 trials*. Next,
the predictive ASE/ACE system was run as a comparative metric, and it was found that
this method caused the controller to average 131,297 time steps until failure; the results
are comparable to that described by Barto, Sutton and Anderson.
In the next experiment, the short recurrence learning system was added to the ASE system. Again, ten 100-trial learning sessions were executed. On the average, the short recurrence learning algorithm ran for over 400,736 time steps after the 100th trial, bettering
the ASE/ACE system by 205%.
In the final experiment, constant recurrence learning with the ASE system was
simulated. The constant recurrence learning eliminated failure after only 207,562 time
steps.
Figure 1 shows the ASE, ASE/ACE, Constant recurrence learning (H1) and
Short recurrence learning (H2) failure rates averaged over 10 simulation runs.
DISCUSSION
Detection of cycles provides a heuristic for the "two-armed bandit" problem to decide between evidence gathering and goal-directed search. The algorithm allows the
automaton to search outward from the cycle states (states with high probability of
revisitation) to the more unexplored search space. The rate of exploration is
proportional to the recurrence learning parameter β2; as β2 is decreased, the influence
of the cycles governing the decision process also decreases and the algorithm explores
more of the search space that is not part of any cycle or oscillation path.
* However,
there was a relatively large degree of variance in the final trials. The last
10 trials (averaged over each of the 10 simulations) ranged from 607 to 15,459 time steps until failure.
THE FUTURE
Our future experiments will study the effects of rewarding predictions of cycle
lengths in a manner similar to the prediction of failure used by the ASE/ACE system.
The effort will be to minimize the differences of predicted time of cycles in order to
predict their period. Results of this experiment will be shown in future reports. We
hope to show that this recurrence prediction system is generally superior to either the
ASE/ACE predictive system or the short recurrence system operating alone.
CONCLUSION
This paper presented an extension to the failure driven learning algorithm based
on reinforcing decisions that cause an automaton to enter environmental states more
than once. The controller learns to synthesize the best values by reinforcing areas of
the search space that produce recurring state visitation. Cycle states, which under
normal failure driven learning algorithms do not learn, achieve weight alteration from
success. Simulations show that recurrence reward algorithms show improved overall
learning of the cart-pole task with a substantial decrease in learning time.
REFERENCES
1. D. Michie and R. Chambers, Machine Intelligence, E. Dale and D. Michie, Eds. (Oliver and Boyd, Edinburgh, 1968), p. 137.
2. A. Barto, R. Sutton, and C. Anderson, COINS Tech. Rept. No. 82-20, 1982.
3. B. Widrow and F. Smith, in Computer and Information Sciences, J. Tou and R. Wilcox, Eds. (Clever Hume Press, 1964).
4. M. Minsky, in Proc. IRE, 49, 8 (1961).
5. J. Holland, in Proc. Int. Conf. Genetic Algorithms and their Applications, 1985, p. 1.
6. A. Samuel, IBM Journ. Res. and Dev. 3, 211 (1959).
7. M. Connell and P. Utgoff, in Proc. AAAI-87 (Seattle, 1987), p. 456.
8. C. Anderson, COINS Tech. Rept. No. 86-50, Amherst, MA, 1986.
9. R. Barnhill, in Mathematical Software III (Academic Press, 1977).
10. L. Schumaker, in Approximation Theory II (Academic Press, 1976).
11. A. H. Klopf, in IEEE Int. Conf. Neural Networks, June 1987.
12. R. Sutton, GTE Tech. Rept. TR87-509.1, GTE Labs. Inc., Jan. 1987.
13. R. Sutton and A. G. Barto, Tech. Rept. TR87-5902.2, March 1987.
14. A. Barto and P. Anandan, IEEE Trans. SMC 15, 360 (1985).
15. M. Sato, K. Abe, and H. Takeda, IEEE Trans. SMC 14, 528 (1984).
2,536 | 330 | The Recurrent Cascade-Correlation Architecture
Scott E. Fahlman
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
Recurrent Cascade-Correlation (RCC) is a recurrent version of the Cascade-Correlation learning architecture of Fahlman and Lebiere [Fahlman, 1990]. RCC
can learn from examples to map a sequence of inputs into a desired sequence of
outputs. New hidden units with recurrent connections are added to the network
as needed during training. In effect, the network builds up a finite-state machine
tailored specifically for the current problem. RCC retains the advantages of
Cascade-Correlation: fast learning, good generalization, automatic construction
of a near-minimal multi-layered network, and incremental training.
1 THE ARCHITECTURE
Cascade-Correlation [Fahlman, 1990] is a supervised learning architecture that builds a
near-minimal multi-layer network topology in the course of training. Initially the network
contains only inputs, output units, and the connections between them. This single layer of
connections is trained (using the Quickprop algorithm [Fahlman, 1988]) to minimize the
error. When no further improvement is seen in the level of error, the network's performance
is evaluated. If the error is small enough, we stop. Otherwise we add a new hidden unit to
the network in an attempt to reduce the residual error.
To create a new hidden unit, we begin with a pool of candidate units, each of which receives
weighted connections from the network's inputs and from any hidden units already present
in the net. The outputs of these candidate units are not yet connected into the active network.
Multiple passes through the training set are run, and each candidate unit adjusts its incoming
weights to maximize the correlation between its output and the residual error in the active
net. When the correlation scores stop improving, we choose the best candidate, freeze its
incoming weights, and add it to the network. This process is called "tenure." After tenure,
a unit becomes a permanent new feature detector in the net. We then re-train all the weights
going to the output units, including those from the new hidden unit. This process of adding
a new hidden unit and re-training the output layer is repeated until the error is negligible or
we give up. Since the new hidden unit receives connections from the old ones. each hidden
unit effectively adds a new layer to the net.
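The candidate score maximized during this phase can be written, following Fahlman and Lebiere's formulation, as the magnitude of the covariance between a candidate's output and the residual error, summed over output units. The sketch below assumes the candidate outputs and residual errors have been recorded over the training patterns.

import numpy as np

def candidate_score(V, E):
    # Cascade-Correlation candidate objective.
    # V : (P,) candidate output over P training patterns
    # E : (P, O) residual error at each of O output units
    # Returns S = sum_o | sum_p (V_p - mean V)(E_po - mean_o E) |.
    v = V - V.mean()
    e = E - E.mean(axis=0)
    return float(np.abs(v @ e).sum())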
Cascade-correlation eliminates the need for the user to guess in advance the network's
size, depth, and topology. A reasonably small (though not minimal) network is built
automatically. Because a hidden-unit feature detector, once built, is never altered or
cannibalized, the network can be trained incrementally. A large data set can be broken
up into smaller "lessons," and feature-building will be cumulative. Cascade-Correlation
learns much faster than backprop for several reasons: First only a single layer of weights
is being trained at any given time. There is never any need to propagate error information
backwards through the connections, and we avoid the dramatic slowdown that is typical
when training backprop nets with many layers. Second, this is a "greedy" algorithm: each
new unit grabs as much of the remaining error as it can. In a standard backprop net. the all
the hidden units are changing at once, competing for the various jobs that must be done-a
slow and sometimes unreliable process.
Cascade-correlation, like back-propagation and other feed-forward architectures, has no
short-term memory in the network. The outputs at any given time are a function only of
the current inputs and the network's weights. Of course, many real-world tasks require the
recognition of a sequence of inputs and, in some cases, the corresponding production of a
sequence of outputs. A number of recurrent architectures have been proposed in response
to this need. Perhaps the most widely used, at present. is the Elman model [Elman, 1988].
which assumes that the network operates in discrete time-steps. The outputs of the network's
hidden units at time t are fed back for use as additional network inputs at time-step t+ 1. These
additional inputs can be thought of as state-variables whose contents and interpretation are
determined by the evolving weights of the network. In effect, the network is free to choose
its own representation of past history in the course of learning.
Recurrent Cascade-Correlation (RCC) is an architecture that adds Elman-style recurrent
operation to the Cascade-Correlation architecture. However, some changes were needed in
order to make the two models fit together. In the original Elman architecture there is total
connectivity between the state variables (previous outputs of hidden units) and the hidden
unit layer. In Cascade-Correlation, new hidden units are added one by one, and are frozen
once they are added to the network. It would violate this concept to insert the outputs from
new hidden units back into existing hidden units as new inputs. On the other hand, the
network must be able to form recurrent loops if it is to retain state for an indefinite time.
The solution we have adopted in RCC is to augment each candidate unit with a single
weighted self-recurrent input that feeds back that unit's own output on the previous time-step. That self-recurrent link is trained along with the unit's other input weights to maximize
the correlation of the candidate with the residual error. If the recurrent link adopts a strongly
positive value, the unit will function as a flip-flop, retaining its previous state unless the
other inputs force it to change; if the recurrent link adopts a negative value, the unit will
tend to oscillate between positive and negative outputs on each time-step unless the other
inputs hold it in place; if the recurrent weight is near zero, then the unit will act as a gate
of some kind. When a candidate unit is added to the active network as a new hidden unit.
the self-recurrent weight is frozen. along with all the other weights. Each new hidden unit
is in effect a single state variable in a finite-state machine that is built specifically for the
task at hand. In this use of self-recurrent connections only, the RCC model resembles the
"Focused Back-Propagation" algorithm of Mozer[Mozer, 1988].
The output, V(t), of each self-recurrent unit is computed as follows:
V(t) = σ( Σi Ii(t) wi + V(t − 1) ws )

where σ is some non-linear squashing function applied to the weighted sum of inputs I plus the self-weight, ws, times the previous output. In the studies described here, σ is always the hyperbolic tangent or "symmetric sigmoid" function, with a range from -1 to +1. During the candidate training phase, we adjust the weights wi and ws for each unit so as to maximize its correlation score. This requires computing the derivative of V(t) with respect to these weights:

∂V(t)/∂wi = σ′(t) ( Ii(t) + ws ∂V(t − 1)/∂wi )

∂V(t)/∂ws = σ′(t) ( V(t − 1) + ws ∂V(t − 1)/∂ws )
The rightmost term reflects the influence of the weight in question on the unit's previous
state. Since we computed ∂V(t − 1)/∂w on the previous time-step, we can just save this
value and use it in the current step. So the recurrent version of the learning algorithm
requires us to store a single additional number for each candidate weight, plus V(t - 1) for
each unit. At t = 0 we assume that the unit's previous value and previous derivatives are
all zero.
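The bookkeeping just described fits in a few lines. The sketch below tracks a single self-recurrent candidate unit; the weight initialization is a placeholder, and tanh stands in for the symmetric sigmoid.

import numpy as np

class RecurrentCandidate:
    # Self-recurrent candidate unit that caches dV/dw from the previous step.
    def __init__(self, n_inputs, rng=np.random.default_rng(0)):
        self.w = rng.normal(0.0, 0.1, n_inputs)  # input weights w_i
        self.ws = rng.normal(0.0, 0.1)           # self-recurrent weight w_s
        self.reset()

    def reset(self):
        # At t = 0 the previous output and all previous derivatives are zero.
        self.v_prev = 0.0
        self.dv_dw = np.zeros_like(self.w)
        self.dv_dws = 0.0

    def step(self, I):
        v = np.tanh(self.w @ I + self.ws * self.v_prev)
        g = 1.0 - v * v                          # sigma'(t) for tanh
        # derivative recurrences, reusing the stored step t-1 values
        self.dv_dw = g * (I + self.ws * self.dv_dw)
        self.dv_dws = g * (self.v_prev + self.ws * self.dv_dws)
        self.v_prev = v
        return v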
As an aside, the usual formulation for Elman networks treats the hidden units' previous
values as independent inputs, ignoring the dependence of these previous values on the
weights being adjusted. In effect, the rightmost terms in the above equations are being
dropped, though they are not negligible in general. This rough approximation apparently
causes little trouble in practice, but it might explain the instability that some researchers
have reported when Elman nets are run with aggressive second-order learning procedures
such as quickprop. The Mozer algorithm does take these extra terms into account.
2 EMPIRICAL RESULTS: FINITE-STATE GRAMMAR
Figure la shows the state-transition diagram for a simple finite-state grammar, called
the Reber grammar, that has been used by other researchers to investigate learning and
generalization in recurrent neural networks. To generate a "legal" string of tokens from
this grammar, we begin at the left side of the graph and move from state to state, following
the directed edges. When an edge is traversed, the associated letter is added to the string.
Where two paths leave a single node, we choose one at random with equal probability. The
resulting string always begins with a "B" and ends with an "E". Because there are loops
in the graph, there is no bound on the length of the strings; the average length about eight
letters. An example of a legal string would be "BTSSXXVPSE".
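A string generator follows directly from the transition diagram. The edge table below is a transcription of the standard Reber grammar and should be checked against figure 1; it does reproduce the example string above.

import random

# (next_state, letter) pairs for each node of the Reber graph; node 5 is final.
REBER = {
    0: [(1, 'T'), (2, 'P')],
    1: [(1, 'S'), (3, 'X')],
    2: [(2, 'T'), (4, 'V')],
    3: [(2, 'X'), (5, 'S')],
    4: [(3, 'P'), (5, 'V')],
}

def reber_string(rng=random.Random(0)):
    # Walk the graph from the start node, emitting one letter per edge.
    state, out = 0, ['B']
    while state != 5:
        state, letter = rng.choice(REBER[state])
        out.append(letter)
    out.append('E')
    return ''.join(out)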
Cleeremans, Servan-Schreiber, and McClelland [Cleeremans, 1989] showed that an Elman
network can learn this grammar if it is shown many different strings produced by the
The Recurrent Cascade-Correlation Architecture
[Figure 1: two state-transition graphs.]
Figure 1: State transition diagram for the Reber grammar (left) and for the embedded Reber grammar (right).
grammar. The internal state of the network is zeroed at the start of each string. The letters
in the string are then presented sequentially at the inputs of the network, with a separate
input connection for each of the seven letters. The network is trained to predict the next
character in the string by turning on one of the seven outputs. The output is compared to
the true successor and network attempts to minimize the resulting errors.
When there are two legal successors from a given state, the network will never be able to
do a perfect job of prediction. During training, the net will see contradictory examples,
sometimes with one successor and sometimes the other. In such cases, the net will eventually
learn to partially activate both legal outputs. During testing, a prediction is considered
correct if the two desired outputs are the two with the largest values.
This task requires generalization in the presence of considerable noise. The rules defining
the grammar are never presented-only examples of the grammar's output. Note that if the
network can perform the prediction task perfectly, it can also be used to determine whether
a string is a legal output of the grammar. Note also that the successor letter(s) cannot be
determined from the current input alone; some memory of of the network's state or past
inputs is essential.
Cleeremans et al. report that a fixed-topology Elman net with three hidden units can learn
this task after 60,000 distinct training strings have been presented, each used only once. A
larger network with 15 hidden units required only 20,000 training strings. These were the
best results obtained, not averages over a number of runs.
RCC was given the same problem, but using a fixed set of 128 training strings, presented
repeatedly. (Smaller string-sets had too many statistical irregularities for reliable training.)
Ten trials were run using different training sets. In nine cases, RCC achieved perfect
performance after building two hidden units; in the tenth, three hidden units were built.
Average training time was 195.5 epochs, or about 25,000 string presentations. (An epoch
is defined as a single pass through a fixed training set.) In every case, the trained network
achieved a perfect score on a set of 128 new strings not used in training. This study used a
pool of 8 candidate units.
Cleeremans et al. also explored the "embedded Reber grammar" shown in figure 1b. Each
of the boxes in the figure is a transition graph identical to the original Reber grammar.
In this much harder task, the network must learn to predict the final T or P correctly. To
accomplish this, the network must note the initial T or P and must retain this information
while processing an "embedded clause" of arbitrary length. It is difficult to discover this
rule from example strings, since the embedded clauses may also contain many T's and P's,
but only the initial T or P correlates reliably with the final prediction. The "signal to noise
ratio" in this problem is very poor.
The standard Elman net was unable to learn this task, even with 15 hidden units and 250,000
training strings. However, the task could be learned partially (correct prediction in about
70% of test strings) if the two copies of the embedded grammar were differentiated by
giving them slightly different transition probabilities.
RCC was run six times on the more difficult symmetrical form of this problem. A candidate
pool of 8 units was used. Each trial used a different set of 256 training strings and the
resulting network was tested on a separate set of 256 strings. As shown in the table below,
perfect performance was achieved in about half the trial runs, requiring 7 -9 hidden units
and and average of 713 epochs (182K string -presentations). 1\vo of the remaining networks
perform at the 99+% level, and one got stuck. (Trial 6 is a successful second run on the
same test set used in trial 5.)
Trial   Hidden Units   Epochs Needed   Train Set Errors   Test Set Errors
  1          9              831                0                  0
  2          7              602                0                  0
  3         15             1256                0                  2
  4         11              910                0                  1
  5         13             1063               11                 16
  6          9              707                0                  0
Smith and Zipser [Smith, 1989] have studied the same grammar-learning tasks using the
time-continuous "Real-Time Recurrent Learning" (or "RTRL") architecture developed by
Williams and Zipser [Williams, 1989]. They report that a network with seven visible (combined input-output) units, two hidden units, and full inter-unit connectivity is able to learn
the simple Reber grammar task after presentation of 19,000 to 63,000 distinct training
strings. On the more difficult embedded grammar task, Smith and Zipser report that RTRL
learned the task perfectly in some (unspecified) fraction of attempts. Successful runs ranged
from 3 hidden units (173K distinct training strings) to 12 hidden units (25K strings). RTRL
is able to deal with discrete or time-continuous problems, while RCC deals only in discrete
events. On the other hand, RTRL requires more computation than RCC in processing each
training example, and RTRL scales up poorly as network size increases.
3 EMPIRICAL RESULTS: LEARNING MORSE CODE
Another series of experiments tested the ability of an RCC network to learn the Morse
code patterns for the 26 English letters. While this task requires no generalization, it
does demonstrate the ability of this architecture to recognize a long, rather complex set of
patterns. It also provides an opportunity to demonstrate RCC's ability to learn a new task
in small increments. This study assumes that the dots and dashes arrive at precise times; it
does not address the problem of variable timing.
The network has one input and 27 outputs: one for each letter and a "strobe" output
signalling that a complete letter has been recognized. A dot is represented as a logical one
(positive input) followed by a logical zero (negative); a dash is two ones followed by a
zero. A second consecutive zero marks the end of the letter. When the second zero is seen
the network must raise the strobe output and one of the other 26; at all other times, the
outputs are zero. For example, the "...-" pattern for the letter V would be encoded as the
input sequence "1010101100". The letter patterns vary considerably in length, from 3 to 12
time-steps, with an average of 8. During training, the network's state is zeroed at the start
of each new letter; once the network is trained, the strobe output could be used to reset the
network.
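This encoding can be made concrete with a short sketch (the dot/dash-to-bit mapping follows the text; the letter table shown is only an excerpt, and the function name is ours):

# Dot -> "10", dash -> "110"; a second trailing zero ends the letter.
MORSE = {'E': '.', 'T': '-', 'A': '.-', 'V': '...-'}  # excerpt of the 26-letter table

def encode_letter(letter):
    bits = ''.join('10' if s == '.' else '110' for s in MORSE[letter])
    return bits + '0'  # the second consecutive zero marks the end of the letter

assert encode_letter('V') == '1010101100'  # the example given in the text
assert len(encode_letter('E')) == 3        # shortest pattern, 3 time-steps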
In one series of trials, the training set included the codes for all 26 letters at once (226
time-steps in all). In ten trials, the network learned the task perfectly in every case, building
an average of 10.5 hidden units and requiring an average of 1321 passes through the entire
training set. Note that the system does not require a distinct hidden unit for each letter or
for each time-slice in the longest sequence.
In a second experiment, we divided the training into a series of short "lessons" of increasing
difficulty. The network was first trained to produce the strobe output and to recognize the
two shortest letters, E and T. This task was learned perfectly, usually with the creation of 2
hidden units. We then set aside the "ET" set and trained successively on the following sets:
"AIN", "DGHKRUW", "BFLOV", and "CJPQXYZ". As a rule, each of these lessons adds
one or two new hidden units, building upon those already present. Finally we train on all
26 characters at once, which generally adds 2-3 more units to the existing set.
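The lesson schedule amounts to the following loop (a schematic sketch; train_rcc is a stub standing in for the RCC training procedure, which is not spelled out here):

LESSONS = ["ET", "AIN", "DGHKRUW", "BFLOV", "CJPQXYZ"]

def train_rcc(network, letters):
    # Stub: train on the Morse codes for `letters`, adding hidden units as
    # needed; units built during earlier lessons stay frozen and available.
    network.setdefault("trained_on", []).append(letters)

def incremental_training(network):
    for lesson in LESSONS:                # short lessons of increasing difficulty
        train_rcc(network, lesson)
    train_rcc(network, "ABCDEFGHIJKLMNOPQRSTUVWXYZ")  # final pass on all 26 letters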
In ten trials, the incremental version learned the task perfectly every time, requiring an
average total of 1427 epochs and 9.6 hidden units, slightly fewer than the number of units
added in block training. While the epoch count is slightly greater than in the block-training
experiment, most of these epochs are run on very small training sets. The incremental
training required only about half as much total runtime as the block training. For learning
of even more complex temporal sequences, incremental training of this kind may prove
essential.
Our approach to incremental training was inspired to some degree by the work reported in
[Waibel, 1989] in which small network modules were trained separately, frozen, and then
combined into a composite network with the addition of some "glue" units. However, in
RCC only the partitioning of the training set is chosen by the user; the network itself builds
the appropriate internal structure, and new units are able to build upon hidden units created
during some earlier lesson.
4 CONCLUSIONS
RCC adds sequential processing to Cascade-Correlation, while retaining the advantages of the
original version: fast learning, good generalization, automatic choice of network topology,
ability to create complex high-order feature detectors, and incremental learning. The
grammar-learning experiments suggest that RCC is more powerful than standard Elman
networks in learning to recognize subtle patterns in sequential data. The RTRL scheme of
Williams and Zipser may be equally powerful, but RTRL is more complex and does not
scale up well when larger networks are needed.
On the negative side, RCC deals in discrete time-steps and not in continuous time. An
interesting direction for future research is to explore the use of an RCC-like structure with
units whose memory of past state is time-continuous rather than discrete.
Acknowledgments
I would like to thank Paul Gleichauf, Dave Touretzky, and Alex Waibel for their help and
useful suggestions. This research was sponsored in part by the National Science Foundation
(Contract EET-87 16324) and the Defense Advanced Research Projects Agency (Contract
F33615-90-C-1465).
References
[Cleeremans, 1989] Cleeremans, A., D. Servan-Schreiber, and J. L. McClelland (1989) "Finite-State Automata and Simple Recurrent Networks" in Neural Computation 1, 372-381.
[Elman, 1988] Elman, J. L. (1988) "Finding Structure in Time," CRL Tech Report 8801, Univ. of California at San Diego, Center for Research in Language.
[Fahlman, 1988] Fahlman, S. E. (1988) "Faster-Learning Variations on Back-Propagation: An Empirical Study" in Proceedings of the 1988 Connectionist Models Summer School, Morgan Kaufmann.
[Fahlman, 1990] Fahlman, S. E. and C. Lebiere (1990) "The Cascade-Correlation Learning Architecture" in D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, Morgan Kaufmann.
[Mozer, 1988] Mozer, M. C. (1988) "A Focused Back-Propagation Algorithm for Temporal Pattern Recognition," Tech Report CRG-TR-88-3, Univ. of Toronto, Dept. of Psychology and Computer Science.
[Smith, 1989] Smith, A. W. and D. Zipser (1989) "Learning Sequential Structure with the Real-Time Recurrent Learning Algorithm" in International Journal of Neural Systems, Vol. 1, No. 2, 125-131.
[Waibel, 1989] Waibel, A. (1989) "Consonant Recognition by Modular Construction of Large Phonemic Time-Delay Neural Networks" in D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 1, Morgan Kaufmann.
[Williams, 1989] Williams, R. J. and D. Zipser (1989) "A learning algorithm for continually running fully recurrent neural networks," Neural Computation 1, 270-280.
Part V
Speech
Bayesian Inference for Spiking Neuron Models
with a Sparsity Prior
Sebastian Gerwinn
Jakob H Macke
Matthias Seeger
Matthias Bethge
Max Planck Institute for Biological Cybernetics
Spemannstrasse 41
72076 Tuebingen, Germany
{firstname.surname}@tuebingen.mpg.de
Abstract
Generalized linear models are the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such
models. Using the expectation propagation algorithm, we are able to approximate
the full posterior distribution over all weights. In addition, we use a Laplacian
prior to favor sparse solutions. Therefore, stimulus features that do not critically
influence neural activity will be assigned zero weights and thus be effectively
excluded by the model. This feature selection mechanism facilitates both the interpretation of the neuron model as well as its predictive abilities. The posterior
distribution can be used to obtain confidence intervals which makes it possible
to assess the statistical significance of the solution. In neural data analysis, the
available amount of experimental measurements is often limited whereas the parameter space is large. In such a situation, both regularization by a sparsity prior
and uncertainty estimates for the model parameters are essential. We apply our
method to multi-electrode recordings of retinal ganglion cells and use our uncertainty estimate to test the statistical significance of functional couplings between
neurons. Furthermore we used the sparsity of the Laplace prior to select those
filters from a spike-triggered covariance analysis that are most informative about
the neural response.
1 Introduction
A central goal of systems neuroscience is to identify the functional relationship between environmental stimuli and a neural response. Given an arbitrary stimulus we would like to predict the neural
response as well as possible. In order to achieve this goal with limited amount of data, it is essential
to combine the information in the data with prior knowledge about neural function. To this end,
generalized linear models (GLMs) have proven to be particularly useful as they allow for flexible
model architectures while still being tractable for estimation.
The GLM neuron model consists of a linear filter, a static nonlinear transfer function and a Poisson
spike generating mechanism. To determine the neural response to a given stimulus, the stimulus
is first convolved with the linear filter (i.e. the receptive field of the neuron). Subsequently, the
filter output is converted into an instantaneous firing rate via a static nonlinear transfer function,
and finally spikes are generated from an inhomogeneous Poisson-process according to this firing
rate. Note, however, that the GLM neuron model is not limited to describing neurons with Poisson
firing statistics. Rather, it is possible to incorporate influences of its own spiking history on the
neural response. That is, the firing rate is then determined by a combination of both the external
stimulus and the spiking-history of the neuron. Thus, the model can account for typical effects
such as refractory periods, bursting behavior or spike-frequency adaptation. Last but not least, the
GLM neuron model can also be applied to populations of coupled neurons by making each neuron
dependent not only on its own spiking activity but also on the spiking history of all the other neurons.
In previous work (Pillow et al., 2005; Chornoboy et al., 1988; Okatan et al., 2005) it has been
shown how point-estimates of the GLM-parameters can be obtained using maximum-likelihood (or
maximum a posteriori (MAP)) techniques. Here, we extend this approach one step further by using
Bayesian inference methods in order to obtain an approximation to the full posterior distribution,
rather than point estimates. In particular, the posterior determines confidence intervals for every
linear weight, which facilitates the interpretation of the model and its parameters. For example, if
a weight describes the strength of coupling between two neurons, then we can use these confidence
intervals to test whether this weight is significantly different from zero. In this way, we can readily
distinguish statistical significant interactions between neurons from spurious couplings.
Another application of the Bayesian GLM neuron model arises in the context of spike-triggered
covariance analysis. Spike-triggered covariance basically employs a quadratic expansion of the
external stimulus parameter space and is often used in order to determine the most informative
subspace. By combining spike-triggered covariance analysis with the Bayesian GLM framework,
we will present a new method for selecting the filters of this subspace.
Feature selection in the GLM neuron model can be done by the assumption of a Laplace prior over
the linear weights, which naturally leads to sparse posterior solutions. Consequently, all weights
are equally strongly pushed to zero. This contrasts with the Gaussian prior, which pushes weights to zero
proportional to their absolute value. In this sense, the Laplace prior can also be seen as an efficient
regularizer, which is well suited for the situation when a large range of alternative explanations for
the neural response shall be compared on the basis of limited data. As we do not perform gradient
descent on the posterior, differentiability of the posterior is not required.
The paper is organized as follows: In section 2, we describe the model, and the "expectation propagation" algorithm (Minka, 2001; Opper & Winther, 2000) used to find the approximate posterior
distribution. In section 3, we estimate the receptive fields, spike-history effects and functional couplings of a small population of retinal ganglion cells. We demonstrate that for small training sets,
the Laplace-prior leads to superior performance compared to a Gaussian-prior, which does not lead
to sparse solutions. We use the confidence intervals to test whether the functional couplings between
the neurons are significant.
In section 4, we use the GLM neuron model to describe a complex cell response recorded in macaque
primary visual cortex: After computing the spike-triggered covariance (STC) we determine the
relevant stimulus subspace via feature selection in our model. In contrast to the usual approach, the
selection of the subspace in our case becomes directly linked to an explicit neuron model which also
takes into account the spike-history dependence of the spike generation.
2 Generalized Linear Models and Expectation Propagation

2.1 Generalized Linear Models
Let $X_t \in \mathbb{R}^d$, $t \in [0, T]$ denote a time-varying stimulus and $D_i = \{t_{i,j}\}$ the spike-times of $i = 1, \ldots, n$ neurons. Here $X_t$ consists of the sensory input at time $t$ and can include preceding input frames as well. We assume that the stimulus can only change at distinct time points, but can be evaluated at continuous time $t$. We would like to incorporate spike-history effects, couplings between neurons and dependence on nonlinear features of the stimulus. Therefore, we describe the effective input to a neuron via the following feature-map:

$$\phi(t) = \phi_{st}(X_t) \oplus \bigoplus_i \phi_{sp}(\{t_{i,j} \in D_i : t_{i,j} < t\}),$$

where $\phi_{sp}$ represents the spike time history and $\phi_{st}$ the possibly nonlinear feature map for the stimulus. That is, the complete feature vector $\phi$ contains possibly nonlinear features of the stimulus and the spike history of every neuron. Any feature which is causal in the sense that it does not depend on future events can be used. We model the spike history dependence by a set of small time
windows $[t - \delta_l, t - \delta_l')$ in which occurring spikes are counted:

$$(\phi_{sp,i}(\{t_{i,j} \in D_i : t_{i,j} < t\}))_l = \sum_{j:\, t_{i,j} < t} \mathbf{1}_{[t-\delta_l,\, t-\delta_l')}(t_{i,j}),$$

where $\mathbf{1}_{[a,b)}(t)$ denotes the indicator function, which is one if $t \in [a, b)$ and zero otherwise. In other words, for each neuron there is a set of windows $l = 1, \ldots, L$ with time-lags $\delta_l$ and width $\delta_l - \delta_l'$ describing its spiking history. More precisely, the rate can only change if the stimulus changes or a spike leaves or enters one of these windows. Thus, we obtain a sequence of changepoints $0 = \tilde t_0 < \tilde t_1 < \cdots < \tilde t_j < \cdots < T$, where each feature $\phi_i(t)$ is constant in $[\tilde t_{j-1}, \tilde t_j)$, attaining the value $\phi_{i,j}$. In the GLM neuron model setting, the instantaneous firing rate of neuron $i$ is obtained by
a linear filter of the feature map:
$$p(\mathrm{spike}\,|\,X_t, \{t_{i,j} \in D : t_{i,j} < t\}) = \lambda(w_i^T \phi(t)), \qquad (1)$$
where $\lambda$ is the nonlinear transfer function. Following general point process theory (Snyder & Miller, 1991) and using the fact that the features stay constant between two changepoints, we can write down the likelihood $P(D|\{w\}) = \prod_{i=1}^n L_i(w_i)$, where each $L_i(w_i)$ has the form

$$L_i(w_i) \propto \prod_j \tau_{i,j}(u_{i,j}), \qquad u_{i,j} = w_i^T \phi_j,$$
$$\tau_{i,j}(u_{i,j}) = \lambda_i(u_{i,j})^{\sum_{t \in D_i} \delta(t - \tilde t_j)} \exp\!\big(-\lambda_i(u_{i,j})(\tilde t_j - \tilde t_{j-1})\big).$$
The function $\delta(\cdot)$ in the second equation is defined to be one if and only if its argument equals zero. The sum therefore is 1 iff a spike of neuron $i$ occurs at changepoint $\tilde t_j$. Note that the changepoints $\tilde t_j$ depend on the spikes and therefore the process is not Poissonian, as it might be suggested by the functional form of the likelihood.
As it has been shown in (Paninski, 2004), the likelihood is log-concave in $w_i$ if $\lambda_i(\cdot)$ is both convex and log-concave. We are using the transfer function $\lambda_i(u) = e^u$ which, in particular, gives rise to a log-linear point process model. Alternatively, one could also use $\lambda_i(u) = e^u \mathbf{1}_{u<0} + (1 + u)\mathbf{1}_{u \ge 0}$, which grows only linearly (cf. Harris et al. (2003); Pillow et al. (2005)).
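A minimal sketch of how this likelihood can be evaluated for one neuron, under our own variable names and assuming the changepoint grid, interval durations, feature values, and spike indicators have already been extracted from the data:

import numpy as np

def log_likelihood(w, Phi, dt, spikes, transfer="exp"):
    # Phi: (num_changepoints, num_features) features, constant on each interval.
    # dt: durations of the intervals [t_{j-1}, t_j).
    # spikes: 1 where a spike of this neuron falls on changepoint j, else 0.
    u = Phi @ w
    if transfer == "exp":
        lam = np.exp(u)                            # lambda(u) = e^u
    else:
        lam = np.where(u < 0, np.exp(u), 1.0 + u)  # linearly growing variant
    return np.sum(spikes * np.log(lam)) - np.sum(lam * dt)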
While we require all rates $\lambda_i(t)$ to be piecewise constant, it should be noted that we do not restrict
ourselves to a uniform quantization of the time axis. In this way, we achieve an efficient architecture
for which the density of change points automatically adapts to the speed with which the input signal
is changing.
The choice of the prior distribution can play a central role when coping with limited amount of data.
We use a Laplace prior distribution over the weights in order to favor sparse solutions over those
which explain the data equally well but require more weights different from zero (c.f. Tibshirani
(1996)):
$$P(w_i) \propto \prod_k e^{-\tau_k |w_{k,i}|}. \qquad (2)$$
Thus, prior factors have the form $\tau_{i,k}(u_{i,k}) = \frac{\tau_k}{2}\exp(-\tau_k |u_{i,k}|)$ with $\phi_k = (\mathbf{1}_{l=k})_l$ and $u_{i,k} = w_i^T \phi_k$ as above. In our applications, we allowed the prior variance $2/\tau_k^2$ of the stimulus-dependent features to be different from the variance of the spike-history features. The posterior takes the form:

$$P(w|D) \propto \prod_{i,j} \tau_{i,j}(u_{i,j}),$$
where each $\tau_{i,j}$ individually instantiates a Generalized Linear Model (either corresponding to a likelihood factor or to a prior factor). As the posterior factorizes over neurons, we can perform our analysis for each neuron separately. Therefore, for simplicity we drop the subscript $i$ in the
following.
Our model does not assume or require any specific stimulus distribution. In particular, it is not
limited to white noise stimuli or elliptically contoured distributions but it can be used without modification for other stimulus distributions such as natural image sequences. Finally, this framework
allows exact sampling of spike trains due to the piecewise constant rate.
2.2 Expectation Propagation
As exact Bayesian inference is intractable in our model, we seek to find a good approximation to the
full posterior. In our case all likelihood and prior factors are log-concave. Therefore, the posterior is
unimodal and a Gaussian approximation is well suited. A frequently used technique for this purpose
is the Laplace-approximation which computes a quadratic approximation to the log-posterior based
on the Hessian around the maximum. For the Laplacian prior, however, this approach falls short
since the distribution is not differentiable at zero. Instead, we employ the Expectation Propagation
(EP) algorithm (Minka, 2001; Opper & Winther, 2000). In this approximation technique, each factor
(also called site) $\tau_j$ of the posterior is replaced by an unnormalised Gaussian:

$$N^U(u_j \,|\, b_j, \pi_j) = \exp\!\left(-\tfrac{1}{2}\pi_j u_j^2 + b_j u_j\right) =: \tilde\tau(u_j), \qquad \pi_j \geq 0,$$
where the $b_j, \pi_j$ are called the site parameters. The approximation aims at minimizing the Kullback-Leibler divergence between the full posterior $P(w|D)$ and the approximation, $Q(w) \propto \prod_j \tilde\tau(u_j)$. The log-concavity of the model implies that all $\pi_j \geq 0$, which supports the numerical stability of the EP algorithm. Some of the $\pi_j$ may even be 0, as long as $Q(w)$ is a (normalizable) Gaussian. An EP update at $j$ consists of computing the Gaussian cavity distribution $Q^{\setminus j} \propto Q \tilde\tau_j^{-1}$ and the non-Gaussian tilted distribution $\hat P \propto Q^{\setminus j} \tau_j$, then updating $b_j, \pi_j$ such that the new $Q'$ has the same mean and covariance as $\hat P$ (moment matching). This is iterated in random order over the sites until convergence.
We omit the detailed update schemes here and refer to (Seeger et al., 2007; Seeger, 2005). Convergence guarantees for EP applied to non-Gaussian log-concave models have not been shown so far. Nevertheless, it is reported that at least in the log-concave case EP behaves stably (e.g., Rasmussen & Williams (2006)), and we observe quick convergence in our case (about 20 iterations over all sites are required). The model still contains hyperparameters, namely the prior variances $2/\tau_k^2$. In each experiment, these were determined via a standard crossvalidation procedure (80% training data, 10% validation, 10% test).
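A generic skeleton of the scalar EP update just described (our own sketch; the tilted-moment computations specific to the likelihood and Laplace sites are omitted):

def ep_update(site, q_mean, q_var, tilted_moments):
    # Cavity: divide the site out of the marginal (natural parameters subtract).
    cav_pi = 1.0 / q_var - site["pi"]
    cav_b = q_mean / q_var - site["b"]
    cav_var, cav_mean = 1.0 / cav_pi, cav_b / cav_pi
    # Tilted distribution P_hat ~ cavity * exact site; match its moments.
    new_mean, new_var = tilted_moments(cav_mean, cav_var)
    # Recover updated site parameters so that cavity * site has these moments.
    site["pi"] = 1.0 / new_var - cav_pi
    site["b"] = new_mean / new_var - cav_b
    return new_mean, new_var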
3 Modeling retinal ganglion cells: Which cells are functionally coupled?
We applied the GLM neuron model to multi-electrode recordings of three rabbit retinal ganglion
cells. The stimulus consisted of 32767 frames, each of which showed a random 16 × 16 checkerboard
pattern with a refresh rate of 50 Hz (data provided by G. Zeck, see (Zeck et al., 2005)).
[Figure: negative log-likelihood score on an independent test set as a function of training data-set size (% of complete dataset), for models trained with a Laplace prior or a Gaussian prior.]

First, in order to investigate the role of the Laplace prior, we trained a single-cell GLM neuron model on datasets of different sizes with either a Laplace prior or a Gaussian prior. The models, which have the same number of parameters, were compared by evaluating their negative log-likelihood on an independent test set. As can be seen in the figure, the choice of prior becomes less important for large training sets, as the weights are sufficiently constrained by the data. For each training set size a separate crossvalidation was carried out. Errorbars were obtained by drawing 100 samples from the posterior.
Fig. 1 shows the spatiotemporal receptive field of each neuron, as well as the filters describing
the influence of spiking history and input from other cells. For conciseness, we only plot the filters
for 80 and 120 ms time lags, but the fitted model included 60 and 140 ms time lags as well. The
strongly positive weights on the diagonal of figure 1(c) for the spiking history can be interpreted
as "self-excitation". In this way, it is possible to model the bursting behavior exhibited by the cells
in our recordings (see also Fig. 2). The strongly negative weights at small time lags represent refractory periods. The red lines correspond to 3 standard deviations of the posterior. The first neuron
seems to elicit "bursts" at lower frequencies. Note the different scaling of the y-axis for diagonal and
off-diagonal terms. By analyzing the coupling terms, we can see that there is significant interaction
between cells 2 and 3, but not between any other pair of cells. As our prior assumption is that the
couplings are 0, this interaction-term is not merely a consequence of our choice of prior. As a result
of our crossvalidation it turns out that the prior variance for spike history weights should be set to
very large values ($\tau = 0.1$, variance $= 2/\tau^2$), meaning that these are well determined by the data. In
contrast, prior variances for the stimulus weights should be more strongly biased towards zero ($\tau = 150$).
Figure 1: (a): Stimulus dependence inferred by the GLM for the three neurons (columns) at different
time lags (rows). 2 of 4 time lags are plotted (60, 140 ms not shown). (b): Spike-triggered average
for the same neurons and time lags as in (a). (c): Causal dependencies between the three neurons.
Each plot shows the value of the linear weight as a function of increasing time lag $\delta_l$ (in ms). Shown
are posterior mean and three std. dev. (indicated in red). Different scaling of the y-axis is used for
diagonal and off-diagonal plots.
                    Neuron 1   Neuron 2   Neuron 3   Mean
STA                 0.2199     0.1746     0.1828     0.1924
GLM                 0.2442     0.2348     0.3319     0.2703
GLM with couplings  0.3576     0.3320     0.4202     0.3699

Table 1: Prediction performance of different models. Entries correspond to the correlation coefficient between the predicted rate of each model and spikes on a test set. Both rate and spikes are binned in 5 ms bins. The first GLM models neither connections nor self-feedback.
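The performance measure used in Table 1 can be computed as follows (a sketch; array names are ours, and the predicted rate is assumed to be sampled on the same 5 ms grid):

import numpy as np

def correlation_score(spike_times, rate, t_max, bin_width=0.005):
    # Correlation coefficient between the binned spike train and the
    # predicted rate, with 5 ms bins as in Table 1.
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    spikes_binned, _ = np.histogram(spike_times, bins=edges)
    return np.corrcoef(spikes_binned, rate)[0, 1]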
Because of the regularization by the prior the spatio-temporal receptive fields are much smoother
than the spike-triggered average ones, see Fig. 1(a).

Figure 2: Predicted rate for the GLM neuron model with and without any spike history and the predicted rate for the STA for the same neurons as in the other plots. For the STA the linear response is rectified. Rate for the GLM with spike dependence is obtained by averaging over 1000 sampled spike-trains. Rates are rescaled to have the same standard deviation.

The receptive fields of the STA seem to be more smeared out, which might be due to the fact that it cannot model bursting behavior. The more
conservative estimate of the sparse neuron model should increase the prediction performance. To
verify this, we calculated the linear response from the spike-triggered average and the rate of our
GLM neuron model. In order to have the same number of parameters we neglected all connections.
As a model-free performance measure we used the correlation coefficient between the spike trains
and the rates (each binned in 5 ms bins). For the GLM with couplings, rates were estimated
by sampling 1000 spike trains with the posterior mean as linear weights. As our model explicitly
includes the nonlinearity during fitting, the rate is more sharply peaked around the spikes, see Fig. 2.
The prediction performance can be increased even further by modeling couplings between neurons
as summarized in Tab. 1.
4 Modeling complex cells: How many filters do we need?
Complex cells in primary visual cortex exhibit strongly nonlinear response properties which cannot
be well described by a single linear filter, but rather requires a set of filters. A common approach
for finding these filters is based on the covariance of the spike-triggered ensemble: Eigenvectors of
eigenvalues that are much bigger (or smaller) than the eigenvalues of the whole stimulus ensemble
indicate directions in stimulus space to which the cell is sensitive to. Usually, a statistical hypothesis
test on the eigenvalue-spectrum is used to decide how many of the eigenvectors ei are needed to
model the cells (Simoncelli et al., 2004; Touryan et al., 2002; Rust et al., 2005; Steveninck & Bialek,
1988). Here, we take a different approach: We use the confidence intervals of our GLM neuron
model to determine the relevant dimensions within the subspace revealed by STC. We first apply
STC to find the space spanned by a set of eigenvectors that is substantially larger than the expected
dimensionality of the relevant subspace. Next, we fit a nonlinear function $n_i$ to the filter-outputs
$f_i(X_t) = \langle X_t, e_i \rangle$. Finally, we linearly combine the $n_i(t)$, resulting in a model of the same form as
equation (1) with $(\phi_{st})_i(X_t) = n_i(f_i(X_t))$.
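One way to realize this construction in code (a sketch under assumptions of ours: the eigenvalue comparison is approximated by diagonalizing the difference of the spike-triggered and complete covariances, frames with at least one spike stand in for the spike-triggered ensemble, and the quadratic form stands in for the fitted nonlinearity):

import numpy as np

def stc_filters(stimuli, spike_counts, k=40):
    # Eigenvectors whose eigenvalues differ most between the spike-triggered
    # ensemble and the complete stimulus ensemble.
    C_all = np.cov(stimuli.T)
    C_spk = np.cov(stimuli[spike_counts > 0].T)
    evals, evecs = np.linalg.eigh(C_spk - C_all)
    order = np.argsort(-np.abs(evals))
    return evecs[:, order[:k]]

def stimulus_feature(stimuli, e_i, coef):
    f = stimuli @ e_i            # filter output f_i(X_t) = <X_t, e_i>
    a, b, c = coef               # quadratic nonlinearity n_i, fit by regression
    return a * f ** 2 + b * f + c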
Figure 3: (a): 24 out of 40 filters estimated by STC. The filters are ordered according to the log-ratio of their eigenvalue to the corresponding eigenvalue of the complete stimulus ensemble (from left to right). Highlighted filters are those with significant non-zero weights, red indicating excitatory and blue inhibitory filters. (b) Upper: Posterior mean +/- 3 std. dev. Filter indices are ordered in the same way as in (a). Lower: Predicted rate for STC and for the GLM neuron model with spike history dependence on a test set.
As the model is linear in the weights $w_i$, we can use the GLM neuron model to fit these weights
and obtain confidence intervals. If a filter $f_i$ is not needed for explaining the cell's response, its
corresponding weight wi will automatically be set to zero by the model due to the sparsity prior.
This provides an alternative, model-based method of determining the number of filters required to
model the cell. The significance of each filter is not determined by a separate hypothesis test on
the spectrum of the spike-triggered covariance, but rather by assessing its influence on the neural
activity within the full model.
As in the previous application, we can model the spike history effects with an additional feature
vector ?sp to take into account temporal dynamics of single neurons or couplings.
Before applying our method to real data, we tested it on data generated from an artificial complex cell
similar to the one in (Rust et al., 2005). On this simulated data we were able to recover the original
filters. We then fitted this GLM neuron model to data recorded from a complex cell in primary visual
cortex of an anesthetized macaque monkey (same data as in (Rust et al., 2005)). We first extracted
40 filters whose eigenvalues were most different from the corresponding eigenvalues of the complete
stimulus ensemble. Any nonlinear regression procedure could be used to fit a nonlinearity to each
filter output. We used a simple quadratic regression technique. Having fixed the first nonlinearity
we approximated the posterior as above. The resulting confidence intervals for the linear weights
are plotted in Fig. 3(b). The filters with significant non-zero weights are highlighted in Fig. 3(a).
Red indicates excitatory and blue inhibitory effects on the firing rate. Using 3 std. dev. confidence
intervals, 9 excitatory and 8 inhibitory filters turned out to be significant in our model. The number
of filters is similar to that reported by Rust et al., who regarded 7 excitatory and 7 inhibitory filters as
significant (Rust et al., 2005). The rank order of the linear weights is closely related but not identical
to the order of eigenvalues, as can be seen in Fig. 3(b), top.
5 Summary and Conclusions
We have shown how approximate Bayesian inference within the framework of generalized linear
models can be used to address the problem of identifying relevant features of neural data. More
precisely, the use of a sparsity prior favors sparse posterior solutions: non-zero weights are assigned
only to those features which are critical for explaining the data. Furthermore, the explicit
7
uncertainty information obtained from the posterior distribution enables us to identify ranges of statistical significance and therefore facilitates the interpretation of the solution. We used this technique
to determine couplings between neurons in a multi-cell recording and demonstrated an increase in
prediction performance due to regularization by the sparsity prior. Also, in the context of spiketriggered covariance analysis, we used our method to determine the relevant stimulus subspace
within the space spanned by the eigenvectors. Our subspace selection method is directly linked
to an explicit neuron model which also takes into account the spike-history dependence of the spike
generation.
Acknowledgements
We would like to thank Günther Zeck and Nicole Rust for generously providing their data and for
useful discussions.
References
Chornoboy, E., Schramm, L., & Karr, A. (1988). Maximum likelihood identification of neural point
process systems. Biological Cybernetics, 59, 265-275.
Harris, K., Csicsvari, J., Hirase, H., Dragoi, G., & Buzsaki, G. (2003). Organization of cell assemblies in the hippocampus. Nature, 424(6948), 552-556.
Minka, T. (2001). Expectation propagation for approximate Bayesian inference. Uncertainty in
Artificial Intelligence, 17, 362-369.
Okatan, M., Wilson, M. A., & Brown, E. N. (2005). Analyzing functional connectivity using a
network likelihood model of ensemble neural spiking activity. Neural Computation, 17, 1927-1961.
Opper, M., & Winther, O. (2000). Gaussian Processes for Classification: Mean-Field Algorithms.
Neural Computation, 12(11), 2655-2684.
Paninski, L. (2004). Maximum likelihood estimation of cascade point-process neural encoding
models. Network, 15(4), 243-262.
Pillow, J. W., Paninski, L., Uzzell, V. J., Simoncelli, E. P., & Chichilnisky, E. J. (2005). Prediction
and decoding of retinal ganglion cell responses with a probabilistic spiking model. J Neurosci,
25(47), 11003-11013.
Rasmussen, C., & Williams, C. (2006). Gaussian processes for machine learning. Springer.
Rust, N., Schwartz, O., Movshon, J., & Simoncelli, E.(2005). Spatiotemporal Elements of Macaque
V1 Receptive Fields. Neuron, 46(6), 945-956.
Seeger, M. (2005). Expectation propagation for exponential families (Tech. Rep.). University of
California at Berkeley. (See www.kyb.tuebingen.mpg.de/bs/people/seeger.)
Seeger, M., Steinke, F., & Tsuda, K. (2007). Bayesian inference and optimal design in the sparse
linear model. AI and Statistics.
Simoncelli, E., Paninski, L., Pillow, J., & Schwartz, O. (2004). Characterization of neural responses
with stochastic stimuli. In M. Gazzaniga (Ed.), (Vol. 3, pp. 327-338). MIT Press.
Snyder, D., & Miller, M. (1991). Random point processes in time and space. Springer Texts in
Electrical Engineering.
Steveninck, R., & Bialek, W. (1988). Real-Time Performance of a Movement-Sensitive Neuron in
the Blowfly Visual System: Coding and Information Transfer in Short Spike Sequences. Proceedings of the Royal Society of London. Series B, Biological Sciences, 234(1277), 379-414.
Tibshirani, R. (1996). Regression Shrinkage and Selection via the Lasso. Journal of the Royal
Statistical Society. Series B (Methodological), 58(1), 267-288.
Touryan, J., Lau, B., & Dan, Y. (2002). Isolation of Relevant Visual Features from Random Stimuli
for Cortical Complex Cells. Journal of Neuroscience, 22(24), 10811.
Zeck, G. M., Xiao, Q., & Masland, R. H. (2005). The spatial filtering properties of local edge
detectors and brisk-sustained retinal ganglion cells. Eur J Neurosci, 22(8), 2016-26.
Experience-Guided Search:
A Theory of Attentional Control
Michael C. Mozer
Department of Computer Science and
Institute of Cognitive Science
University of Colorado
mozer@colorado.edu
David Baldwin
Department of Computer Science
Indiana University
Bloomington, IN 47405
baldwind@indiana.edu
Abstract
People perform a remarkable range of tasks that require search of the visual environment for a target item among distractors. The Guided Search model (Wolfe,
1994, 2007), or GS, is perhaps the best developed psychological account of human visual search. To prioritize search, GS assigns saliency to locations in the
visual field. Saliency is a linear combination of activations from retinotopic maps
representing primitive visual features. GS includes heuristics for setting the gain
coefficient associated with each map. Variants of GS have formalized the notion
of optimization as a principle of attentional control (e.g., Baldwin & Mozer, 2006;
Cave, 1999; Navalpakkam & Itti, 2006; Rao et al., 2002), but every GS-like model
must be "dumbed down" to match human data, e.g., by corrupting the saliency map
with noise and by imposing arbitrary restrictions on gain modulation. We propose
a principled probabilistic formulation of GS, called Experience-Guided Search
(EGS), based on a generative model of the environment that makes three claims:
(1) Feature detectors produce Poisson spike trains whose rates are conditioned on
feature type and whether the feature belongs to a target or distractor; (2) the environment and/or task is nonstationary and can change over a sequence of trials;
and (3) a prior specifies that features are more likely to be present for targets than
for distractors. Through experience, EGS infers latent environment variables that
determine the gains for guiding search. Control is thus cast as probabilistic inference, not optimization. We show that EGS can replicate a range of human data
from visual search, including data that GS does not address.
1 Introduction
Human visual activity often involves search. We search for our keys on a cluttered desk, a face in
a crowd, an exit sign on the highway, a key paragraph in a paper, our favorite brand of cereal at
the supermarket, etc. The flexibility of the human visual system stems from the endogenous (or
internal) control of attention, which allows for processing resources to be directed to task-relevant
regions and objects in the visual field. How is attention directed based on an individual's goals? To
what sort of features of the visual environment can attention be directed? These two questions are
central to the study of how humans explore their environment.
Visual search has traditionally been studied in the laboratory using cluttered stimulus displays containing artificial objects. The objects are defined by a set of primitive visual features, such as
color, shape, and size. For example, an experimental task might be to search for a red vertical line
segment (the target) among green verticals and red horizontals (the distractors). Performance is
typically evaluated as the response latency to detect the presence or absence of a target with high
accuracy. The efficiency of visual search is often characterized by the search slope, the increase
in response latency with each additional distractor in the display.

[Figure 1: The architecture of Guided Search. Box labels: visual image; primitive-feature contrast maps (red, green, horizontal, vertical); top-down gains; bottom-up activations; noise process; saliency map; attentional state; attentional selection.]

An inefficient search can often
require an additional 25-35 ms/item (or more, if eye movements are required).
Many computational models of visual search have been proposed to explain data from the burgeoning experimental literature (e.g., Baldwin & Mozer, 2006; Cave, 1999; Itti & Koch, 2001; Mozer,
1991; Navalpakkam & Itti, 2006; Sandon, 1990; Wolfe, 1994). Despite differences in their details,
they share central assumptions, perhaps most plainly emphasized by Wolfe?s (1994) Guided Search
2.0 (GS) model. We describe the central assumptions of GS, taking some liberty in ignoring details
and complications of GS that obfuscate the similarities within this class of models and that are not
essential for the purpose of this paper.1
As depicted in Figure 1, GS posits that primitive visual features are detected across the retina in parallel
along dimensions such as color and orientation, yielding a set of feature activity maps. Feature
activations are scalars in [0, 1]. The feature maps represent each dimension via a coarse coding.
That is, the maps for a particular dimension are highly overlapping and broadly tuned. For example,
color might be represented by maps tuned to red, green, blue, and yellow; orientation might be
represented by maps tuned to left, right, steep, and shallow-sloped edges. The feature activity maps
are passed through a differencing mechanism that enhances local contrast and texture discontinuities,
yielding a bottom-up activation.
The bottom-up activations from all feature maps are combined to form a saliency map in which
activation at a location indicates the priority of that location for the task at hand. Attention is directed
to locations in order from most salient to least, and the object at each location is identified. GS
supposes that response latency is linear in the number of locations that need to be searched before
a target is found. (The model includes rules for terminating search if no target is found after a
reasonable amount of effort.)
Consider the task of searching for a red vertical bar among green vertical bars and red horizontal
bars. Ideally, attention should be drawn to red and vertical bars, not to green or horizontal bars. To
allow for guidance of attention, GS posits that a weight or top-down gain is associated with each
feature map, and the contribution of a given feature map to the saliency map is scaled by the gain. It
is the responsibility of control processes to determine gains that emphasize task-relevant features.
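In code, the saliency computation reduces to a gain-weighted sum of the feature maps (a minimal sketch; array names are ours):

import numpy as np

def gs_saliency(feature_maps, gains):
    # feature_maps: (num_maps, height, width) bottom-up activations.
    # gains: one top-down gain per feature map.
    return np.tensordot(gains, feature_maps, axes=1)

Attention would then visit locations in decreasing order of the resulting map.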
Although gain modulation is a sensible means of implementing goal-directed action, it yields behavior that is more efficient than people appear to be. To elaborate, consider again the task of
searching for a red vertical. If the gains on the red and vertical maps are set to 1, and the gains on
green and horizontal maps are set to 0, then a target (red vertical) will have two units of activation
in the saliency map, whereas each distractor (red horizontal or green vertical) will have only one
unit of activation. Because the target is the most salient item and GS assumes that response time is
monotonically related to the saliency ranking of the target, the target should be located quickly, in a
time independent of the number of distractors. In contrast, human response times increase linearly
with the number of distractors in conjunction search.
To reduce search efficiency, GS assumes noise corruption of the saliency map. In the case of GS, the
signal-to-noise ratio is roughly 2:1. Baldwin and Mozer (2006) also require noise corruption for the
same reason, although the corruption is to the low-level feature representation not the saliency map.
Although Navalpakkam and Itti (2006) do not explicitly introduce noise in their model, they do so
implicitly via a selection rule that the probability of attending an item is proportional to its saliency.
To further reduce search efficiency, GS includes a complex set of rules that limit gain control. For example, gain modulation is allowed for only one feature map per dimension. Other attentional models
place similar, somewhat arbitrary limitations on gain modulation. Baldwin and Mozer (2006) impose the restriction $\sum_i |g_i - 1| < c$, where $g_i$ is the gain of feature map $i$ and $c$ is a constant. Navalpakkam and Itti (2006) prefer the constraints $\sum_i g_i = c$ and $g_i > 0$.

Footnote 1: Although Guided Search has undergone refinement (Wolfe, 2007), the key claims summarized here are unchanged. Recent extensions to GS consider eye movements and acuity changes with retinal eccentricity.
Finally, we mention one other key property the various models have in common. Gain tuning is
cast as an optimization problem: the goal of the model is to adjust the gains so as to maximize the
target saliency relative to distractor saliency for the task at hand. Baldwin and Mozer (2006) define
the criterion in terms of the target saliency ranking. Navalpakkam and Itti (2006) use the expected
target to distractor saliency ratio. Wolfe (1994) sets gains according to rules that he describes as
performing optimization.
2 Experience-Guided Search
The model we introduce in this paper makes three contributions over the class of Guided Search
models previously proposed. (1) GS uses noise or nondeterminism to match human data. In reality,
noise and nondeterminism serve to degrade the model?s performance over what it could otherwise
be. In contrast, all components of our model are justified on computational grounds, leading to a
more elegant, principled account. (2) GS imposes arbitrary limitations on gain modulation that also
result in the model performing worse than it otherwise could. Although limitations on gain modulation might be neurobiologically rationalized, a more elegant account would characterize these
limitations in terms of trade-offs: constraints on gain modulation may limit performance, but they
yield certain benefits. Our model offers such a trade-off account. (3) In GS, attentional control is
achieved by tuning gains to optimize performance. In contrast, our model is designed to infer the
structure of its environment through experience, and gain modulation is a byproduct of this inference. Consequently, our model has no distinct control mechanism, leading to a novel perspective on
executive control processes in the brain.
Our approach begins with the premise that attention is fundamentally task based: a location in the
visual field is salient if a target is likely at that location. We define saliency as the target probability,
$P(T_x = 1|F_x)$, where $F_x$ is the local feature activity vector at retinal location $x$ and $T_x$ is a binary
random variable indicating if location $x$ contains a target. Torralba et al. (2006) and Zhang and
Cottrell (submitted) have also suggested that saliency should reflect target probability, although they
propose approaches to computing the target probability very different from ours. Our approach is to
compute the target probability using statistics obtained from recent experience performing the task.
Consequently, we refer to our model as experience-guided search or EGS.
To expand $P(T_x|F_x)$, we make the naive-Bayes assumption that the feature activities are independent of one another, yielding

$$P(T_x|F_x, \theta) = P(T_x)\prod_i P(F_{xi}|T_x, \theta) \Big/ \sum_{t=0}^{1} P(T_x = t)\prod_i P(F_{xi}|T_x = t, \theta), \qquad (1)$$
where $\theta$ is a vector of parameters that characterize the current stimulus environment in the current task, and $F_{xi}$ encodes the activity of feature $i$. Consider $F_{xi}$ to be a rate-coded representation of a neural spike train. Specifically, $F_{xi}$ denotes the count of the number of spikes that occurred in a window of $n$ time intervals, where at most one spike can occur in each interval.
We propose a generative model of the environment in which Fxi is a binomial random variable,
Fxi |{Tx = t, ?} ? Binomial(?it , n), where a spike rate ?it is associated with feature i for target
(t = 1) and nontarget (t = 0) items. As n becomes large?i.e., the spike count is obtained over
a larger period of time?the binomial is well approximated by a Gaussian: Fxi |{Tx = t, ?} ?
N (n?it , n?it (1 ? ?it )). Using the Gaussian approximation, Equation 1 can be rewritten in the form
n
of a logistic function: P (Tx |Fx , ?) = 1/(1 + e?(rx + 2 sx ) ), where
1
XX
P (Tx = 1)
1X
?i1 (1 ? ?i1 )
1 ? 2t
rx = ln
?
ln
and sx =
(f?xi ??it )2 (2)
P (Tx = 0)
2 i
?i0 (1 ? ?i0 )
?
(1
?
?
)
it
it
i t=0
and f?xi = fxi /n denotes the observed spike rate for a feature detector.
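To make Equation 2 concrete, here is a minimal sketch of the saliency computation for a single location. It is our own illustration, not code from the paper, and the function and argument names are hypothetical:

```python
import numpy as np

def saliency(f_bar, phi, p_target=0.5):
    """r_x and s_x of Equation 2 for a single location.

    f_bar : array (num_features,) of observed spike rates f_bar_xi.
    phi   : array (num_features, 2); phi[i, t] is the spike rate of
            feature i for distractors (t = 0) and targets (t = 1).
    """
    r_x = np.log(p_target / (1.0 - p_target)) - 0.5 * np.sum(
        np.log((phi[:, 1] * (1 - phi[:, 1])) / (phi[:, 0] * (1 - phi[:, 0]))))
    s_x = sum(np.sum((1 - 2 * t) / (phi[:, t] * (1 - phi[:, t]))
                     * (f_bar - phi[:, t]) ** 2) for t in (0, 1))
    return r_x, s_x
```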
Because of the logistic relationship, P(T_x | F_x, θ) is monotonic in r_x + (n/2) s_x. Consequently, if attentional priority is given to locations in order of their target probability, P(T_x | F_x, θ), then it is equivalent to rank using r_x + (n/2) s_x. Further, if we assume that the target is equally likely in any location, then r_x is constant across locations, and s_x can substitute for P(T_x | F_x, θ) as an equivalent measure of saliency.
This saliency measure, s_x, makes intuitive sense. Saliency at a location increases if feature i's activity is distant from the mean activity observed in the past for a distractor (φ_i0) and decreases if feature i's activity is distant from the mean activity observed in the past for a target (φ_i1). These saliency increases (decreases) are scaled by the variance of the distractor (target) activities, such that high-variance features have less impact on saliency.
Expanding the numerator terms in the definition of s_x (Equation 2), we observe that s_x can be written as a linear combination of terms involving the feature activities, f̄_xi, and the squared activities, f̄²_xi (along with a constant term that can be ignored for ranking by saliency). The saliency measure s_x in EGS is thus quite similar to the saliency measure in GS, s_x^GS = ∑_i g_i f̄_xi. The differences are: first, EGS incorporates quadratic terms, and second, gain coefficients of EGS are not free parameters but are derived from statistics of targets and distractors in the current task and stimulus environment. In this fact lies the virtue of EGS relative to GS: the control parameters are obtained not by optimization, but are derived directly from statistics of the environment.
2.1 Uncertainty in the Environment Statistics
The model parameters, θ, could be maximum likelihood estimates obtained by observing target and distractor activations over a series of trials. That is, suppose that each item in the display is identified as a target or distractor. The set of activations of feature i at all locations containing a target could be used to estimate φ_i1, and likewise with locations containing a distractor to estimate φ_i0. Alternatively, one could adopt a Bayesian approach and treat φ_it as a random variable, whose uncertainty is reduced by the evidence obtained on each trial. Because feature spike rates lie in [0, 1], we define φ_it as a beta random variable, φ_it ~ Beta(α_it, β_it).
This Bayesian approach also allows us to specify priors over φ_it in terms of imaginary counts, α⁰_it and β⁰_it. For example, in the absence of any task experience, a conservative assumption is that all feature activations are predictive of a target, i.e., φ_i1 should be drawn from a distribution biased toward 1, and φ_i0 should be drawn from a distribution biased toward 0.
To compute the target probabilities, we must marginalize over φ, i.e., P(T_x | F_x) = ∫ P(T_x | F_x, φ) p(φ) dφ. Unfortunately, this integral is impossible to evaluate analytically. We instead compute the expectation of s_x over φ, s̄_x ≡ E_φ(s_x), which has the solution

s̄_x = ∑_i ∑_{t=0}^{1} (1 − 2t) [ ((α_it + β_it − 1)(α_it + β_it − 2)) / ((α_it − 1)(β_it − 1)) · f̄²_xi − (2(α_it + β_it − 1)) / (β_it − 1) · f̄_xi + α_it / (β_it − 1) ]    (3)
Note that, like the expression for s_x, s̄_x is a weighted sum of linear and quadratic feature-activity terms. When α_it and β_it are large, the distribution of φ_it is sharply peaked, and s̄_x approaches s_x with φ_it = α_it/(α_it + β_it). When this condition is satisfied, ranking by s̄_x is equivalent to ranking by P(T_x | F_x). Although the equivalence is not guaranteed for smaller α_it and β_it, we have found the equivalence to hold in empirical tests. Indeed, in our simulations, we find that defining saliency as either s_x or s̄_x yields similar results, reinforcing the robustness of our approach.
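A corresponding sketch of the expected saliency of Equation 3 (again our own illustration; it assumes every α_it and β_it exceeds 1 so the expectations exist, and the names are hypothetical):

```python
import numpy as np

def expected_saliency(f_bar, alpha, beta):
    """Expected saliency s_bar_x of Equation 3 for a single location.

    f_bar       : array (num_features,) of observed spike rates.
    alpha, beta : arrays (num_features, 2); Beta(alpha[i, t], beta[i, t])
                  models phi_it for distractors (t = 0) and targets (t = 1).
                  All entries must exceed 1 for the expectations to exist.
    """
    s = 0.0
    for t in (0, 1):
        a, b = alpha[:, t], beta[:, t]
        quad = (a + b - 1) * (a + b - 2) / ((a - 1) * (b - 1))
        lin = 2 * (a + b - 1) / (b - 1)
        const = a / (b - 1)
        s += np.sum((1 - 2 * t) * (quad * f_bar ** 2 - lin * f_bar + const))
    return s
```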
2.2 Modeling the Stimulus Environment
The parameter vectors α and β maintain a model of the stimulus environment in the context of the current task. Following each trial, these parameters must be updated to reflect the statistics of the trial. We assume that following a trial, each item in the display has been identified as either a target or distractor. (All other adaptive attention models such as GS make this assumption.)
Consider a location x that has been labeled as type t (1 for target, 0 for distractor), and some feature i at that location, F_xi. We earlier characterized F_xi as a binomial random variable reflecting a spike count; that is, during n time intervals, f_xi spikes are observed. Each time interval provides evidence as to the value φ_it. Given prior distribution φ_it ~ Beta(α_it, β_it), the posterior is φ_it | F_xi ~ Beta(α_it + f_xi, β_it + n − f_xi). However, to limit the evidence provided from each item, we scale it by a factor of n. When all locations are considered, the resulting posterior is:

φ_it | F_i ~ Beta( α_it + ∑_{x∈χ_t} f̄_xi ,  β_it + ∑_{x∈χ_t} (1 − f̄_xi) )    (4)

where F_i is feature map i and χ_t is the set of locations containing elements of type t.
With the approach we've described, evidence concerning the value of φ_it accumulates over a sequence of trials. However, if an environment is nonstationary, this build up of evidence is not adaptive. We thus consider a switching model of the environment that specifies that, with probability λ, the environment changes and all evidence should be discarded. The consequence of this assumption is that the posterior distribution is a mixture of Equation 4 and the prior distribution, Beta(α⁰_it, β⁰_it). Modeling the mixture distribution is problematic because the number of mixture components grows linearly with the number of trials. We could approximate the mixture distribution by the beta distribution that best approximates the mixture, in the sense of Kullback-Leibler divergence. However, we chose to adopt a simpler, more intuitive solution: to interpolate between the two distributions.
The update rule we use is therefore

φ_it | F_i ~ Beta( λα⁰_it + (1 − λ)[ α_it + ∑_{x∈χ_t} f̄_xi ] ,  λβ⁰_it + (1 − λ)[ β_it + ∑_{x∈χ_t} (1 − f̄_xi) ] ).    (5)
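A hedged sketch of the update rule of Equation 5 (our own illustration; the array shapes and names are assumptions):

```python
import numpy as np

def update_beta_params(alpha, beta, alpha0, beta0, f_bar, labels, lam):
    """One-trial update of the Beta parameters via Equation 5.

    alpha, beta   : arrays (num_features, 2), current parameters.
    alpha0, beta0 : same shape, the prior (imaginary-count) parameters.
    f_bar         : array (num_locations, num_features) of spike rates.
    labels        : array (num_locations,) with 1 = target, 0 = distractor.
    lam           : probability that the environment switched.
    """
    new_alpha, new_beta = alpha.copy(), beta.copy()
    for t in (0, 1):
        rows = f_bar[labels == t]           # the locations in chi_t
        evid_a = rows.sum(axis=0)           # sum of f_bar over chi_t
        evid_b = (1.0 - rows).sum(axis=0)   # sum of (1 - f_bar) over chi_t
        new_alpha[:, t] = lam * alpha0[:, t] + (1 - lam) * (alpha[:, t] + evid_a)
        new_beta[:, t] = lam * beta0[:, t] + (1 - lam) * (beta[:, t] + evid_b)
    return new_alpha, new_beta
```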
3 Simulation Methodology
We present a step-by-step description of how the model runs to simulate experimental subjects performing a visual search task. We start by generating a sequence of experimental trials with the properties studied in an experiment. The model is initialized with α_it = α⁰_it and β_it = β⁰_it. On each simulation trial, the following sequence occurs. (1) Feature extraction is performed on the display to obtain firing rates, f̄_xi, for each location x and feature type i. (2) Saliency, s̄_x, is computed for each location according to Equation 3. (3) The saliency rank of each location is assessed, and the number of locations that need to be searched in order to identify the target is assumed to be equal to the target rank. Response time should then be linear in target rank. (4) Following each trial, target and distractor statistics, α_it and β_it, are updated according to Equation 5. A minimal sketch of this trial loop appears below.
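The sketch below is our own illustration of steps (1)-(4), reusing the expected_saliency and update_beta_params helpers sketched earlier; the 25 msec/item rank-to-RT scaling follows the text, and the remaining names are hypothetical:

```python
import numpy as np

def run_trial(display, labels, alpha, beta, alpha0, beta0, lam,
              ms_per_item=25.0):
    """Simulate one EGS trial and return (response_time_ms, alpha, beta).

    display : array (num_locations, num_features) of firing rates f_bar,
              as produced by the GS front end.
    labels  : array (num_locations,) with 1 = target, 0 = distractor.
    """
    # Step 2: expected saliency at every location (Equation 3).
    s_bar = np.array([expected_saliency(f, alpha, beta) for f in display])
    # Step 3: response time is linear in the target's saliency rank.
    order = np.argsort(-s_bar)                       # most salient first
    target_rank = int(np.where(labels[order] == 1)[0][0]) + 1
    rt = ms_per_item * target_rank
    # Step 4: update the Beta statistics (Equation 5).
    alpha, beta = update_beta_params(alpha, beta, alpha0, beta0,
                                     display, labels, lam)
    return rt, alpha, beta
```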
EGS has potentially many free parameters: {α⁰_it} and {β⁰_it}, and λ. However, with no reason to believe that one feature behaves differently than another, we assign all the features the same priors. Further, we impose symmetry such that α⁰_i0 = β⁰_j1 = β and α⁰_i1 = β⁰_j0 = α for all i and j, reducing the total number of free parameters to three.
Because we are focused on the issue of attentional control, we wanted to sidestep other issues, such as feature extraction. Consequently, EGS uses the front-end preprocessing of GS. GS takes as input an 8 × 8 array of locations, each of which can contain a single colored bar. As described earlier, GS analyzes the input via four broadly tuned features for color, and four for orientation. After a local contrast-enhancement operator, GS yields activation values in [0, 1] at each of 8 × 8 locations for each of eight feature dimensions. We treat the activation produced by GS for feature i at location x as the firing rate f̄_xi needed to simulate EGS. Like GS, the response time of EGS is linear in the target ranking. A scaling factor is required to convert rank to response time; we chose 25 msec/item, which is a fourth free parameter of GS.
4 Results
We simulated EGS on a series of tasks that Wolfe (1994) used to evaluate GS. Because GS is limited
to processing displays containing colored, oriented lines, some of the tasks constructed by Wolfe did
not have an exact correspondence in the experimental literature. Rather, Wolfe, the leading expert
in visual search, identified key findings that he wanted GS to replicate. Because EGS shares front-end processing with GS, EGS is limited to the same set of tasks as GS. Thus, we present a
comparison of GS and EGS.
We began by replicating Wolfe's results on GS. This replication was nontrivial, because GS contains many parameters, rules, and special cases, and published descriptions of GS do not provide a crisp algorithmic description of the model. To implement EGS, we simply removed much of the complexity of GS (including the distinction between bottom-up and top-down weights, heuristics for setting the weights, and the injection of high-amplitude noise into the saliency map) and replaced it with Equations 3 and 5.
Each simulation begins with a sequence of 100 practice trials, followed by a sequence of 1000 trials
for each blocked condition. Displays on each trial are generated according to the constraints of
the task with random variation with respect to unconstrained aspects of the task (e.g., locations of
display elements, distractor identities, etc.). In typical search tasks, the participant is asked to detect
the presence or absence of a target. We omit results for target-absent trials, since GS and EGS make
identical predictions for these trials.
The qualitative performance of EGS does not depend on its free parameters when two conditions are met: λ > 0 and α > β. The latter condition yields E[φ_i1] > E[φ_i0] for all i, and corresponds to the bias that features are more likely to be present for a target than for a distractor. This bias is rational in order to prevent cognition from suppressing information that could potentially be critical to behavior. All simulation results reported here used λ = 0.3, α = 25, and β = 10.
Figure 2 shows simulation results on six sets of tasks, labeled A-F. The first and third columns (thin lines) are data from our replication of GS; the second and fourth columns (thick lines) are data from our implementation of EGS. The key feature to note is that results from EGS are qualitatively and quantitatively similar to results from GS. As should become clear when we explain the individual tasks, EGS probably produces a better qualitative fit to the human data. (Unfortunately, it is not feasible to place the human data side-by-side with the simulation results. Although the six sets of tasks were chosen by Wolfe to represent key experiments in the literature, most are abstractions of the original experimental tasks because the retina of GS, and of its descendant EGS, is greatly simplified and cannot accommodate the stimulus arrays used in human studies. Thus, Wolfe never intended to quantitatively model specific experimental studies.)
We briefly describe the six tasks. The first four involve displays of a homogeneous color, and search for a target orientation among distractors of different orientations. Task A explores search for a vertical (defined as 0°) target among homogeneous distractors of a different orientation. The graph plots the slope of the line relating display size to response latency, as a function of the distractor orientation. Search slopes become more efficient as the target-distractor similarity decreases. Task B explores search for a target among two types of distractors as a function of display size. The distractors are 100° apart, and the target is 40° and 60° from the distractors, but in one case the target differs from the distractors in that it is the only nearly vertical item, allowing pop out via the vertical feature detector. Note that pop out is not wired into EGS, but emerges because EGS identifies vertical-feature activity as a reliable predictor of the target. Task C examines search efficiency for a target among heterogeneous distractors, for two target orientations and two degrees of target-distractor similarity. Search is more efficient when the target and distractors are dissimilar. (EGS obtains results better matched to the human data than GS.) Task D explores an asymmetry in search: it is more efficient to find a tilted bar among verticals than a vertical among tilted. This effect arises from the same mechanism that yielded efficient search in task B: a unique feature is highly activated when the target is tilted but not when it is vertical. And search is better guided to features that are present than to features that are absent in EGS, due to the prior bias α > β. Task E involves conjunction search. The target is a red vertical among green vertical and red tilted distractors. The red item's tilt can be either 90° (i.e., horizontal) or 40°. Both distractor environments yield inefficient search, but, consistent with human data, conjunction searches can vary in their relative difficulty.

Task F examines search efficiency for a red vertical among red 60° and yellow vertical distractors, as a function of the ratio of the two distractor types. The result shows that search can be guided: response times become faster as either the target color or target orientation becomes sparse, because a relatively unique feature serves as a reliable cue to the target. Figure 3a depicts how EGS adapts differently for the extreme conditions in which the distractors are mostly vertical (dark bars) or mostly red (light bars). The bars represent E[φ_i0]; the lower the value, the more a feature is viewed as reliably discriminating targets and distractors. (E[φ_i1] is independent of the experimental condition.) When distractors are mostly vertical, the red feature is a better cue, and vice versa. The standard explanation for this phenomenon in the psychological literature is that subjects operate in two stages, first filtering out based on the more discriminative feature, and then serially searching the remaining items.
Figure 2: Simulation results on six sets of tasks, labeled A-F, for GS (thin lines, 1st and 3rd columns) and EGS (thick lines, 2nd and 4th columns): (A) vertical bar among homogeneous distractors; (B) categorical search; (C) target-distractor similarity; (D) feature search asymmetry; (E) conjunction search varying distractor confusability; (F) conjunction search varying distractor ratio. Panels plot RT (msec) or RT slope (msec/item) against display size, distractor orientation, or proportion of red distractors. Simulation details are explained in the text.

EGS provides a single-stage account that does not need to invoke specialized mechanisms for
adaptation to the environment, because all attentional control is adaptation of this sort.
To summarize, EGS predicts the key factors in visual search that determine search efficiency. Most efficient search is for a target defined by the presence of a single categorical feature among homogeneous distractors that do not share the categorical feature. Least efficient search is for target and distractors that share features (e.g., T among L's, or red verticals among red horizontals and green verticals) and/or when distractors are heterogeneous.
Wolfe, Cave, & Franzel (1989) conducted an experiment to demonstrate that people can benefit
from guidance. This experiment, which oddly has never been modeled by GS, involves search for a
conjunction target defined by a triple of features, e.g., a big red vertical bar. The target might be presented among heterogeneous distractors that share two features with it, such as a big red horizontal
bar, or distractors that share only one feature with it, such as a small green vertical bar. Performance
in these two conditions, denoted T3-D2 and T3-D1, respectively, is compared to performance in
a standard conjunction search task, denoted T2-D1, involving targets defined by two features and
sharing one feature with each distractor. Wolfe et al. reasoned that if search can be guided, saliency
at a location should be proportional to the number of target-relevant features at that location, and the
ratio of target to distractor salience should be x/y in condition Tx-Dy. Because x > y, the target is
always more salient than any distractor, but GS assumes less efficient search due to noise corruption
of the saliency map, thereby predicting search slopes that are inversely related to x/y. The human
data show exactly this pattern, producing almost flat search slopes for T3-D1. EGS replicates the human data (Figure 3b) without employing GS's arbitrary assumption that prioritization is corrupted by noise. Instead, x/y reflects the amount of evidence available on each trial about features that discriminate targets from distractors. Essentially, EGS suggests that x/y determines the availability of discriminative statistics in the environment. Thus, the limitation is on learning, not on performance.
Figure 3: (a) Values of E[φ_i0] in task F, shown for the mostly-vertical and mostly-red distractor conditions. (b) EGS performance (reaction time vs. display size) on the triple-conjunction task of Wolfe, Cave, & Franzel (1989), conditions T3-D2, T2-D1, and T3-D1.

5 Discussion
We presented a model, EGS, that guides visual search via statistics collected over the course of
experience in a task environment. The primary contributions of EGS are as follows. First, EGS is a
significantly more elegant and parsimonious theory than its predecessors. In contrast to EGS, GS is a
complex model under the hood with many free parameters and heuristic assumptions. We and other
groups have spent many months reverse engineering GS to determine how exactly it works, because
published descriptions do not have the specificity of an algorithm. Second, to explain human data,
GS and its ancestors are "retarded" by injecting noise or arbitrarily limiting gains. Although it may
ultimately be determined that the brain suffers from these conditions, one would prefer theories
that cast performance of the brain as ideal or rational. EGS achieves this objective via explicit
assumptions about the generative model of the environment embodied by cognition. In particular,
the dumbing-down of GS and its variants is replaced in EGS by the claim that environments are
nonstationary. If the environment can change from one trial to the next, the cognitive system does
well not to turn up gains on one feature dimension at the expense of other feature dimensions. The
result is a sensible trade off: attentional control can be rapidly tuned as the task or environment
changes, but this flexibility restricts EGS's search efficiency when the task and environment remain
constant. Third, EGS suggests a novel perspective on attentional control, and executive control more
generally. All other modern perspectives we are aware of treat control as optimization, whereas in
EGS, control arises directly from statistical inference on the task environment. Our current research
is exploring the implications of this intriguing perspective.
Acknowledgments
This research was supported by NSF BCS 0339103 and NSF CSE-SMA 0509521. Support for the second
author comes from an NSF Graduate Fellowship.
References
Baldwin, D., & Mozer, M. C. (2006). Controlling attention with noise: The cue-combination model of visual search. In R. Sun & N. Miyake (Eds.), Proc. of the 28th Ann. Conf. of the Cog. Sci. Society (pp. 42-47). Hillsdale, NJ: Erlbaum.
Cave, K. R. (1999). The FeatureGate model of visual selection. Psychol. Res., 62, 182-194.
Itti, L., & Koch, C. (2001). Computational modeling of visual attention. Nature Rev. Neurosci., 2, 194-203.
Mozer, M. C. (1991). The perception of multiple objects: A connectionist approach. Cambridge, MA: MIT Press.
Navalpakkam, V., & Itti, L. (2006). Optimal cue selection strategy. In Advances in Neural Information Processing Systems Vol. 19 (pp. 1-8). Cambridge, MA: MIT Press.
Rao, R., Zelinsky, G., Hayhoe, M., & Ballard, D. (2002). Eye movements in iconic visual search. Vis. Res., 42, 1447-1463.
Sandon, P. A. (1990). Simulating visual attention. Journal of Cog. Neuro., 2, 213-231.
Torralba, A., Oliva, A., Castelhano, M. S., & Henderson, J. M. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features on object search. Psych. Rev., 113, 766-786.
Wolfe, J. M., Cave, K. R., & Franzel, S. L. (1989). Guided search: An alternative to the feature integration model for visual search. Jnl. Exp. Psych.: Hum. Percep. & Perform., 15, 419-433.
Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psych. Bull. & Rev., 1, 202-238.
Wolfe, J. M. (2007). Guided Search 4.0: Current progress with a model of visual search. In W. Gray (Ed.), Integrated Models of Cognitive Systems. NY: Oxford.
Zhang, L., & Cottrell, G. W. (submitted). Probabilistic search: A new theory of visual search. Submitted for publication.
2,539 | 3,302 | Privacy-Preserving Belief Propagation and Sampling
Michael Kearns, Jinsong Tan, and Jennifer Wortman
Department of Computer and Information Science
University of Pennsylvania, Philadelphia, PA 19104
Abstract
We provide provably privacy-preserving versions of belief propagation, Gibbs
sampling, and other local algorithms: distributed multiparty protocols in which
each party or vertex learns only its final local value, and absolutely nothing else.
1 Introduction
In this paper we provide provably privacy-preserving versions of belief propagation, Gibbs sampling, and other local message-passing algorithms on large distributed networks. Consider a network
of human social contacts, and imagine that each party would like to compute or estimate their probability of having contracted a contagious disease, which depends on the exposures to the disease of
their immediate neighbors in the network. If network participants (or their proxy algorithms) engage
in standard belief propagation, each party would learn their probability of exposure conditioned on
any evidence, and a great deal more, including information about the exposure probabilities of
their neighbors. Obviously such leakage of non-local information is highly undesirable in settings
where we regard each party in the network as a self-interested agent, and privacy is paramount. Other
examples include inference problems in distributed military sensor networks (where we would like
the "capture" of one sensor to reveal as little non-local state information as possible), settings where
networks of financial organizations would like to share limited information, and so on.
By a privacy-preserving version of inference (for example), we informally mean a protocol in which
each party learns their conditional probability of exposure to the disease and absolutely nothing else.
More precisely, anything a party can efficiently compute after having participated in the protocol,
they could have efficiently computed alone given only the value of their conditional probability; thus, the protocol leaked no additional information beyond its desired outputs. Classical and powerful tools from cryptography [6] provide solutions to this problem, but with the significant drawback
of entirely centralizing the privacy-preserving computation. Put another way, the straightforward
solution from cryptography requires every party in the network to have the ability to broadcast to
all others, and the resulting algorithm may bear little resemblance to standard belief propagation.
Clearly this is infeasible in settings where the network is very large and entirely distributed, where
individuals may not even know the size of the overall network, much less its structure and the
identity of its constituents. While there has been work on minimizing the number of messages exchanged in centralized privacy-preserving protocols [9], ours are the first results that preserve the
local communication structure of distributed algorithms like belief propagation.
Our protocols are faithful to the network topology, requiring only the passing of messages between
parties separated by one or two hops in the network. Furthermore, our protocols largely preserve
the algebraic structure of the original algorithms (for instance, the sum-product structure of belief
propagation) and enjoy all the correctness guarantees of the originals (such as exact inference in
trees for belief prop or convergence of Gibbs sampling to the joint distribution). Our technical
methods show how to blend tools from cryptography (secure multiparty computation and blindable
encryption) with local message-passing algorithms in a way that preserves the original computations,
but in which all messages appear to be randomly distributed from the viewpoint of any individual.
All results in this paper apply to the "semi-honest" or "honest but curious" model in the cryptography
literature, in which participants obediently execute the protocol but may attempt to infer non-private
information from it. We expect that via the use of zero-knowledge proof techniques, our protocols
may be strengthened to models in which participants who deviate from the protocol are detected.
2 Background and Tools from Cryptography

2.1 Secure Multiparty Function Computation
Let f (x1 , . . . , xk ) be any function on k inputs. Imagine a scenario in which there are k distinct
parties, each in possession of exactly one of these inputs (that is, party i initially knows x_i) and the
k parties would like to jointly compute the value of f (x1 , . . . , xk ). Perhaps the simplest protocol
would have all parties share their private inputs and then individually compute the value of f . However, in many natural settings, we would like the parties to be able to perform this joint computation
in a privacy-preserving fashion, with each party revealing as little as possible about their private
input. Simple examples include voting (we would all like to learn the results of the election without having to broadcast our private votes) and the so-called "Millionaire's Problem" in which two
individuals would like to learn who is wealthier, without revealing their precise wealth to each other.
If a trusted "third party" is available, one solution would be to provide the private inputs to them,
and have them perform the computation in secrecy, only announcing the final result. The purpose
of the theory of secure multiparty function computation [6] is to show that under extremely general
circumstances, a third party is surprisingly unnecessary.
Note that it is typically inevitable that some information is revealed just by the result of the computation of f itself. For example, in the Millionaire's Problem, there is no avoiding the poorer party learning a lower bound on the richer's wealth (namely, the poorer party's wealth). The goal is thus to reveal nothing beyond what is implied by the value of f.
To formalize this notion in a complexity-theoretic framework, let us assume without loss of generality that each input x_i is n bits in length. We make the natural and common assumptions that the function f can be computed in time polynomial in kn, and that each party's computational resources are bounded by a polynomial in n. We (informally) define a protocol Π for the k parties to compute f to be a specific mechanism by which the parties exchange messages and perform computations, ending with every party learning the value y = f(x_1, . . . , x_k). One (uninteresting) protocol is the one in which each party sends their private inputs to all others, and every party computes y alone.
Definition 1¹ Let Π be any protocol for the k parties to jointly compute the value y = f(x_1, . . . , x_k) from their n-bit private inputs. We say that Π is privacy-preserving if for every 1 ≤ i ≤ k, anything that party i can compute in time polynomial in n following the execution of Π, they could also compute in polynomial time given only their private input x_i and the value y.

In other words, whatever information party i is able to obtain from their view of the execution of protocol Π, it does not let them efficiently compute anything they couldn't efficiently compute just from being told the final output y of Π (and their private input x_i). This captures the notion that while y itself may "leak" some information about the other private inputs x_j, the protocol Π yields nothing further.² Further, for the following theorem we can consider more general vector outputs and randomized functionalities, which we need for our technical results.
Theorem 1 (See e.g. [6]) Let f(x_1, . . . , x_k) = (y_1, . . . , y_k) be any (possibly randomized) k-input, k-output functionality that can be computed in polynomial time. Then under standard cryptographic assumptions,³ there exists a polynomial time privacy-preserving protocol Π for f (that is, a protocol in which party i learns nothing not already implied by their private input x_i and private output y_i).
¹ We state this definition informally, as the complete technical definition is somewhat lengthy and adds little intuition. It involves both formalizing the notion of a multiparty computation protocol, as well as defining the "view" of an individual party of a specific execution of the protocol. The definition involves computational indistinguishability of probability distributions since the protocols may often use randomization.
² Our definition of privacy does not imply that coalitions of parties cannot together compute additional information. In the extended version of this paper, we discuss the difficulty of achieving this stronger notion of privacy with any protocol that uses a truly distributed method of computation.
³ An example would be the existence of trapdoor permutations [6].
This remarkable and important theorem essentially says that whatever a population can jointly compute, it can jointly compute with arbitrary restrictions on who learns what. A powerful use of vector
outputs is to enforce knowledge asymmetries on the parties. For instance, in the Millionaire's Problem, by defining one player's output to always be nil, we can ensure that this player learns absolutely
nothing from the protocol, while the other learns which player is wealthier.
The proof of Theorem 1 is constructive, providing an algorithm to transform any polynomial circuit into a polynomial-time privacy-preserving protocol for k parties. As discussed in the introduction, this theorem can be immediately applied to (say) belief propagation to yield centralized
privacy-preserving protocols for inference; our contribution is preserving the highly distributed, local message-passing structure of belief propagation and similar algorithms.
2.2 Public-Key Encryption with Blinding
The second cryptographic primitive that we shall require is standard public-key encryption with an
additional property known as blinding. A standard public-key cryptosystem allows any party to
generate a pair of keys (P, S), which we can think of as k-bit strings; k is often called the security
parameter. Associated with the public key P there is a (possibly probabilistic) encryption function
EP and associated with the secret or private key S there is a (deterministic) decryption function D S .
Informally, the system should have the following security properties:
• For any n-bit x, the value of the function E_P(x) can be computed in polynomial time from inputs x and P. Similarly, D_S(y) can be computed efficiently given y and S.
• For any n-bit input x, D_S(E_P(x)) = x. Thus, decryption is the inverse of encryption.
• For any n-bit x, it is hard for a party knowing only the public key P and the encryption E_P(x) to compute x.⁴
Thus, in such a scheme, anyone knowing the public key of Alice can efficiently compute and send encrypted messages to Alice, but only Alice, who is the sole party knowing her private key, can decrypt those messages. Such cryptosystems are widely believed to exist and numerous concrete proposals have been examined for decades. As one specific example that allows probabilistic encryption of individual bits, let the public key consist of an integer N = p · q that is the product of two k/2-bit randomly generated prime numbers p and q, as well as a number z that has the property that z is not equal to x² mod N for any x. It is easy to generate such (N, z) pairs. In order to encrypt a 0, one simply chooses x at random and lets the encryption be y = x² mod N, known as a quadratic residue. In order to encrypt a 1, one instead sends y = x² · z mod N, which is guaranteed to not be a quadratic residue. It is not difficult to show that given the factors p and q (which constitute the secret key), one can efficiently compute whether y is a quadratic residue and thus learn the decrypted bit. Furthermore, it is widely believed that decryption is actually equivalent to factoring N, and thus intractable without the secret key.
This simple public-key cryptosystem also has the additional blinding property that we will require. Given only the public key (N, z) and an encrypted bit y as above, it is the case that for any value w, w²y mod N is a quadratic residue if and only if y is a quadratic residue, and that furthermore w²y mod N is uniformly distributed among all (non-)quadratic residues if y is a (non-)quadratic residue. Thus, a party knowing only Alice's public key can nevertheless take any bit encrypted for Alice and generate random re-encryptions of that bit without needing to actually know the decryption. We refer to this operation as blinding an encrypted bit.
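As an illustration of these properties, here is a toy sketch of quadratic-residue bit encryption with blinding. It is our own code: the primes are far too small for any real security, and all names and parameter choices are ours.

```python
import math
import random

# Toy parameters for illustration only; a real deployment would use
# randomly generated primes of hundreds of bits.
p, q = 499, 547          # the secret key
N = p * q                # the public modulus

def _is_qr(a, pr):
    """Euler's criterion: is a a quadratic residue modulo the prime pr?"""
    return pow(a % pr, (pr - 1) // 2, pr) == 1

# Public non-residue z: a non-residue modulo both p and q.
z = next(a for a in range(2, N)
         if math.gcd(a, N) == 1 and not _is_qr(a, p) and not _is_qr(a, q))

def _random_unit():
    w = random.randrange(2, N)
    while math.gcd(w, N) != 1:
        w = random.randrange(2, N)
    return w

def encrypt_bit(b):
    """Encrypt 0 as a random residue x^2 mod N, 1 as x^2 * z mod N."""
    x = _random_unit()
    y = (x * x) % N
    return y if b == 0 else (y * z) % N

def decrypt_bit(y):
    """Only the holder of (p, q) can test residuosity and recover b."""
    return 0 if (_is_qr(y, p) and _is_qr(y, q)) else 1

def blind(y):
    """Re-randomize an encrypted bit without knowing its plaintext."""
    w = _random_unit()
    return (w * w * y) % N

c = encrypt_bit(1)
assert decrypt_bit(c) == decrypt_bit(blind(c)) == 1
```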
3 Privacy-Preserving Belief Propagation
In this section we briefly review the standard algorithm for belief propagation on trees [10] and
outline how to run this algorithm in a privacy-preserving manner such that each variable learns only
its final marginals and no additional new information that is not implied by these marginals.
⁴ This is often formalized by asserting that the distribution of the encryption is computationally indistinguishable from true randomness in time polynomial in n and k.

In standard belief propagation, we are given an undirected graphical model with vertex set X for which the underlying network is a tree. We denote by V(X_i) the set of possible values of X_i ∈ X, and by N(X_i) the set of X_i's neighbors. For each X_i ∈ X, we are given a non-negative potential function ψ_i over possible values x_i ∈ V(X_i). Similarly, for each pair of adjacent vertices X_i and X_j, we are given a non-negative potential function ψ_{i,j} over joint assignments to X_i and X_j.
The main inductive phase of the belief propagation algorithm is the message-passing phase. At each step, a node X_i computes a message µ_{i→j} to send to some X_j ∈ N(X_i). This message is indexed by all possible assignments x_j ∈ V(X_j), and is defined inductively by

µ_{i→j}(x_j) = ∑_{x_i ∈ V(X_i)} ψ_i(x_i) ψ_{i,j}(x_i, x_j) ∏_{X_k ∈ N(X_i)\X_j} µ_{k→i}(x_i).    (1)

Belief propagation follows the so-called message-passing protocol, in which any vertex of degree d that has received the incoming messages from any d − 1 of its neighbors can perform the computation above in order to send an outgoing message to its remaining neighbor. Eventually, the vertex will receive a message back from this last neighbor, at which point it will be able to calculate messages to send to its remaining d − 1 neighbors. The protocol begins at the leaves of the tree, and it follows from standard arguments that until all nodes have received incoming messages from all of their neighbors, there must be some vertex that is ready to compute and send a new message. The message-passing phase ends when all vertices have received messages from all of their neighbors.
Once vertex X_i has received all of its incoming messages, the marginal distribution is proportional to their product. That is, if x_i is any setting of X_i, then

P[X_i = x_i] ∝ ψ_i(x_i) ∏_{X_j ∈ N(X_i)} µ_{j→i}(x_i).    (2)

When there is evidence in the network, represented as a partial assignment ~e to some subset E of the variables, we can simply hard-wire this evidence into the potential functions ψ_j for each X_j ∈ E. In this case it is well-known that the algorithm computes the conditional marginals P[X_i = x_i | E = ~e]. For a more in-depth review of belief propagation, see Yedidia et al. [13] or Chapter 8 of Bishop [1].
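For reference before the secure variant below, a compact non-private sketch of Equations (1) and (2) on a tree (our own illustration; the data structures and names are assumptions):

```python
import numpy as np

def belief_propagation(tree, psi_node, psi_edge):
    """Exact marginals on a tree via Equations (1) and (2).

    tree     : dict mapping node -> list of neighboring nodes.
    psi_node : dict mapping i -> 1-D array psi_i over V(X_i).
    psi_edge : dict mapping (i, j) -> 2-D array psi_{i,j}[x_i, x_j].
    """
    msgs = {}  # msgs[(i, j)] is the vector mu_{i -> j}

    def edge(i, j):
        # Accept either orientation of the pairwise potential.
        return psi_edge[(i, j)] if (i, j) in psi_edge else psi_edge[(j, i)].T

    def message(i, j):
        if (i, j) not in msgs:
            prod = psi_node[i].copy()
            for k in tree[i]:
                if k != j:
                    prod *= message(k, i)
            msgs[(i, j)] = edge(i, j).T @ prod   # sums over x_i
        return msgs[(i, j)]

    marginals = {}
    for i in tree:
        b = psi_node[i].copy()
        for j in tree[i]:
            b *= message(j, i)
        marginals[i] = b / b.sum()
    return marginals
```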
3.1 Mask Propagation and the Privacy-Preserving Protocol
We assume that at the beginning of the privacy-preserving protocol, each node X_i knows its own individual potential function ψ_i, as well as the joint potential functions ψ_{i,j} for all X_j ∈ N(X_i). Recall that our fundamental privacy goal is to allow each vertex X_i to compute its own marginal distribution P[X_i = x_i] (or P[X_i = x_i | E = ~e] if there is evidence), but absolutely nothing else. In particular, X_i should not be able to compute the values of any of the incoming messages from its neighbors. Knowledge of µ_{j→i}(x_i), for example, along with µ_{i→j} and ψ_{i,j}, may give X_i information about the marginals over X_j, a clear privacy violation. We thus must somehow prevent X_i from being able to "read" any of its incoming messages (nor even its own outgoing messages), yet still allow each variable to learn its own set of marginals at the end. To accomplish this we combine tools from secure multiparty function computation with a method we call "mask propagation", in which messages remain "masked" (that is, provably unreadable) to the vertices at all times. The keys required to unmask the messages are generated locally as the computation propagates through the tree, thus preserving the original communication pattern of the standard (non-private) algorithm.
Before diving into the secure protocol, we first must fix conventions regarding the encoding of numerical values. We will assume throughout that all potential function values, all message values and all the required products computed by the algorithm can be represented as n-bit natural numbers and thus fall in Z_N = {0, . . . , N − 1} where N = 2^n. As expressed by Equation (2), marginal probabilities are obtained by taking products of such n-bit numbers and then normalizing to obtain finite-precision real-valued numbers in the range [0, 1]. It will be convenient to think of values in Z_N as elements of the cyclic group of order N with addition and subtraction modulo N. In particular, we will make frequent use of the following simple fact: for any fixed x ∈ Z_N, if r ∈ Z_N is chosen randomly among all n-bit numbers, then x + r mod N is also distributed randomly among all n-bit numbers. We can think of the random value r as "masking" or hiding the value of x to a party that does not know r, while leaving it readable to a party that does.
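A tiny illustration of this additive masking fact (our own toy example):

```python
import random

n = 16
N = 1 << n                     # N = 2**n
x = 12345                      # a private n-bit value
r = random.randrange(N)        # uniformly random mask
masked = (x + r) % N           # uniformly random to anyone without r
assert (masked - r) % N == x   # the mask holder can recover x
```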
Let us now return to the message-passing phase of the algorithm described by Equation (1), and let us focus on the computation of µ_{i→j} for a fixed setting x_j of X_j. For the secure version of the algorithm, we make the following inductive message and knowledge assumptions:

• For each X_ℓ ∈ N(X_i)\X_j, and for each setting x_i of X_i, X_i has already obtained a masked version of µ_{ℓ→i}(x_i):

µ_{ℓ→i}(x_i) + ρ_{j,ℓ}(x_i) mod N    (3)

where ρ_{j,ℓ}(x_i) is uniformly distributed in Z_N.
• X_i knows only the sum in Equation (3) (which again is uniformly distributed in Z_N and thus meaningless by itself), and does not know the masking values ρ_{j,ℓ}(x_i).
• Vertex X_j knows only the masking values ρ_{j,ℓ}(x_i), and not the sum in Equation (3).
For all leaf nodes, these assumptions hold trivially at the start of the protocol, providing the base case for the induction. Now under these informational assumptions, vertex X_i knows the set I_i = {µ_{ℓ→i}(x_i) + ρ_{j,ℓ}(x_i) mod N : X_ℓ ∈ N(X_i)\X_j, x_i ∈ V(X_i)} while vertex X_j knows the set I_j = {ρ_{j,ℓ}(x_i) mod N : X_ℓ ∈ N(X_i)\X_j, x_i ∈ V(X_i)}.

Let us first consider the case in which X_j is not a leaf node and thus has neighbors other than X_i itself. In order to complete the inductive step, it will be necessary for each X_k ∈ N(X_j)\X_i to provide a set of masking values ρ_{k,i}(x_j) so that X_j can obtain a set of masked messages of the form µ_{i→j}(x_j) + ρ_{k,i}(x_j). Here we focus on a single neighbor X_k of X_j.
Vertex X_k privately generates a masking value ρ_{k,i}(x_j) that is uniformly distributed in Z_N. It is clear that, ignoring privacy concerns, X_i and X_j together could compute ψ_i(x_i)ψ_{i,j}(x_i, x_j) ∏_{X_ℓ ∈ N(X_i)\X_j} µ_{ℓ→i}(x_i) for each fixed pair x_i and x_j. Thus from their joint inputs I_i, I_j, and ρ_{k,i}(x_j), ignoring privacy, X_i, X_j, and X_k could compute:

[ ∑_{x_i ∈ V(X_i)} ψ_i(x_i) ψ_{i,j}(x_i, x_j) ∏_{X_ℓ ∈ N(X_i)\X_j} µ_{ℓ→i}(x_i) ] + ρ_{k,i}(x_j) mod N = µ_{i→j}(x_j) + ρ_{k,i}(x_j) mod N    (4)
Since this expression can be computed jointly by X_i, X_j and X_k without privacy considerations, Theorem 1 establishes that we can construct an efficient protocol for them to compute it securely, allowing X_j to learn only the value of the expression in Equation (4), while X_i and X_k learn no new information at all (i.e. nil output). Note that this expression, due to the presence of the unknown masking value ρ_{k,i}(x_j), is a uniformly distributed random number in Z_N from X_j's point of view.

After this masking process has been completed for all X_k ∈ N(X_j)\X_i, we will have begun to satisfy the inductive informational assumptions a step further in the propagation: for each neighbor X_k of X_j excluding X_i, X_j will know a masked version of µ_{i→j}(x_j) in which the masking value ρ_{k,i}(x_j) is known only to X_k. X_j will obtain masked messages in a similar manner from all but one of its other neighbors in turn, and for all of its other values, until the inductive assumptions are fully satisfied at X_j. Every value received by X_i, X_j, and X_k during the above protocol is distributed uniformly at random in Z_N from the perspective of its recipient, and thus conveys no information.
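To fix ideas, the following sketch (our own, with hypothetical names) spells out the arithmetic of the joint functionality in Equation (4). In the actual protocol this computation would be carried out inside a secure multiparty computation, so no single party ever sees the unmasked quantities:

```python
def masked_message(x_j, vals_i, psi_i, psi_ij, masked_in, masks_in, rho_ki, N):
    """The joint functionality of Equation (4) for one value x_j.

    vals_i    : the possible values x_i of X_i.
    psi_i     : dict x_i -> psi_i(x_i)            (held by X_i).
    psi_ij    : dict (x_i, x_j) -> psi_{i,j}      (held by X_i).
    masked_in : dict (l, x_i) -> masked incoming message
                mu_{l->i}(x_i) + rho_{j,l}(x_i) mod N   (held by X_i).
    masks_in  : dict (l, x_i) -> rho_{j,l}(x_i)   (held by X_j).
    rho_ki    : fresh uniform mask in Z_N          (held by X_k).
    Returns mu_{i->j}(x_j) + rho_ki mod N, to be learned by X_j alone.
    """
    total = 0
    neighbors = {l for (l, _) in masked_in}
    for x_i in vals_i:
        prod = psi_i[x_i] * psi_ij[(x_i, x_j)]
        for l in neighbors:
            # Unmask mu_{l->i}(x_i); this subtraction happens only inside
            # the secure computation, never in any single party's view.
            prod *= (masked_in[(l, x_i)] - masks_in[(l, x_i)]) % N
        total += prod
    return (total + rho_ki) % N
```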
It remains to consider the case in which X_j is a leaf node. In this case, there is no need to satisfy the inductive assumptions at the next level, as the propagation ends at the leaves. Furthermore, it is acceptable for X_j to learn its incoming messages directly, since these messages will be implied by its final marginal. From their joint input I_i and I_j, it is clear that X_i and X_j together could compute µ_{i→j}(x_j) as given in Equation (1). Thus by Theorem 1, we can construct a protocol for them to efficiently compute this value in such a way that X_j learns only µ_{i→j}(x_j) and X_i learns nothing.

At the end of the message-passing phase, each internal (non-leaf) node X_i will know a set of masked messages from each of its neighbors. In particular, for each pair X_j, X_ℓ ∈ N(X_i), for each x_i ∈ V(X_i), X_i will know the values of µ_{j→i}(x_i) + ρ_{ℓ,j}(x_i). Ignoring privacy concerns, it is the case that X_i and any pair of its neighbors could compute the marginal of X_i in Equation (2). Invoking Theorem 1 again, we can construct an efficient protocol for X_i and this pair of neighbors to together compute the marginals such that X_i learns only the marginals and the neighbors learn nothing.

Each leaf vertex X_i will be in possession of its unmasked messages µ_{j→i}(x_i) for every x_i ∈ V(X_i) from its neighbor X_j, and can easily compute its marginals as given in Equation (2) without having learned anything not already implied by its initial potential functions and the marginals themselves.
We use PrivateBeliefProp(T) to denote the algorithm above when applied to a particular tree T. The full proof of the following is omitted, but follows the logic sketched in the preceding sections.

Theorem 2 Under standard cryptographic assumptions, PrivateBeliefProp(T) allows every variable X_i to compute its own marginal distribution P[X_i] and nothing else (that is, nothing not already computable in polynomial time from only P[X_i] and the initial potential functions). Direct communication occurs only between variables who are immediate neighbors or two steps away in T, and secure function computation is never invoked on sets of more than three variables.⁵
We briefly note a number of extensions to Theorem 2 and the methods described above.
Loopy Belief Propagation: Theorem 2 can be extended to privacy-preserving loopy belief propagation on graphs that contain cycles. Because of the protocol's faithfulness to the original algorithm,
the same convergence and correctness claims hold as in standard loopy belief propagation [7].
Computing Only Partial Information: Allowing a variable to learn its exact numerical marginal
distribution may actually convey a great deal of information. We might instead only want each
variable to learn, for instance, whether its probability of taking on a given value is greater than
0.1 or not. Theorem 2 can easily be generalized to allow each variable to learn only any partial
information about its own marginal.
Privacy-Preserving Junction Tree: The protocol can also be modified to perform privacy-preserving belief propagation on a junction tree [11]. Here it is necessary to take intra-clique privacy
into account in order to enforce that variables can learn only their own marginals and not, for example, the marginals of other nodes within the same clique.
NashProp and Other Message-Passing Algorithms: The methods described here can also be
applied to provide privacy-preserving versions of the NashProp algorithm [8], allowing players in
a multiparty game to jointly compute and draw actions from a Nash equilibrium, with each player
learning only his own action and nothing else.⁶ We are investigating more general applications of
our methods to a broad class of message-passing algorithms that would include many others.
4 Privacy-Preserving Gibbs Sampling
We now move on to the problem of secure Gibbs sampling on an undirected graphical model G. The
local potential functions accompanying G can be preprocessed to obtain conditional distributions for
each variable given a setting of all its neighbors (Markov blanket). Thus we henceforth assume that
each variable has access to its local conditional distribution, which it will be convenient to represent
in a particular tabular form. To simplify presentation, we will assume each variable is binary, taking
on values in {0, 1}, but this assumption is easy to relax.
If a node X_i is of degree d, the conditional distribution of X_i given a particular assignment to its neighbors will be represented by a table T_i with 2^d rows and d + 1 columns. The first d columns range over all 2^d possible assignments ~x to N(X_i), while the final column contains the numerical value P[X_i = 1 | N(X_i) = ~x]. We will use T_i(~x) to denote the value P[X_i = 1 | N(X_i) = ~x] stored in the (d + 1)st column in the row corresponding to the assignment ~x.
With this notation, the standard (non-private) Gibbs sampling algorithm [4, 2] can be easily described. After choosing an initial assignment to all of the variables in G (for instance, uniformly at
random), the algorithm repeatedly resamples values for individual variables conditioned on the current values of their neighbors. More precisely, at each step, a variable Xi is chosen for resampling.
Its current value is replaced by randomly drawing value 1 with probability T i (~x) and value 0 with
probability 1 ? Ti (~x) where ~x is the current set of assignments to N (Xi ).
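For intuition, here is a minimal non-private sketch of this resampling step in Python; the table layout and the helper names `build_table` and `gibbs_step` are ours, not from the paper.

```python
import itertools
import random

def build_table(cond_prob, d):
    # Table T_i: one row per assignment to the d neighbors; the stored
    # value is P[X_i = 1 | neighbors = that assignment].
    return {x: cond_prob(x) for x in itertools.product((0, 1), repeat=d)}

def gibbs_step(T, neighbor_values):
    # Resample X_i given the current assignment of its neighbors:
    # draw 1 with probability T(x), 0 with probability 1 - T(x).
    p_one = T[tuple(neighbor_values)]
    return 1 if random.random() < p_one else 0

# Example: X_i with 2 neighbors and a noisy-OR style conditional.
T = build_table(lambda x: 0.9 if any(x) else 0.1, d=2)
x_i = gibbs_step(T, (1, 0))
```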
To implement a privacy-preserving variant of Gibbs sampling, we must solve the following cryptographic problem: how can a set of vertices communicate with their neighbors in order to repeatedly
resample their values from their conditional distributions given their neighbors' current assignments,
without learning any information except their own final values at the end of the process and anything
that is implied by these values? Again, we would like to accomplish this with limited communication
so that no vertex is required to communicate with a vertex more than two hops away.
5. Since the application of standard secure function computation requires broadcast among all participants, it is a feature of the algorithm that it limits such invocations to three parties at a time.
6. See work by Dodis et al. [3] and Teague [12] for more on privacy-preserving computation in game theory.
In order for each variable to learn only its final sampled value after some number of iterations, and
not its intermediate resampled values (which may be enough to provide a good approximation of the
marginal distribution on the variable), we first provide a way of distributing the current value of a
vertex so that it cannot be learned by any vertex in isolation. One way of accomplishing this is by assigning each vertex Xi a "distinguished neighbor" N*(Xi). Xi will hold one bit bi while N*(Xi) will hold a second bit b′i such that the current value of Xi is bi ⊕ b′i.
Using such an encoding, there is a simple but relatively inefficient construction for privacy-preserving Gibbs sampling that uses only secure multiparty function computation, but that invokes
Theorem 1 on entire neighborhoods of the graph. In graphs with high degree, this requires broadcast communication between a large number of parties, which we would like to avoid. Here we
describe a much more communication-efficient protocol using blinded encryption. For concreteness the reader may imagine below that we are using the blindable cryptosystem based on quadratic
residues described in Section 2.2, though other choices are possible.
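As a toy illustration of what "blindable" means here, the sketch below implements a Goldwasser-Micali-style bit encryption, where re-blinding multiplies a ciphertext by a fresh random square modulo N and so re-randomizes it without changing the encrypted bit. The tiny primes and helper names are our own choices for illustration; nothing here is taken from the paper's Section 2.2, and a real deployment would need cryptographic-size parameters.

```python
import math
import random

# Toy Goldwasser-Micali-style blindable bit encryption (illustration only).
P, Q = 499, 547    # toy primes; a real system needs cryptographic sizes
N = P * Q
Y = 17             # 17 is a quadratic non-residue mod both P and Q

def encrypt(bit):
    # E(b) = r^2 * Y^b mod N: a random square encodes 0, a non-residue encodes 1.
    r = random.randrange(2, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(2, N)
    return (pow(r, 2, N) * pow(Y, bit, N)) % N

def blind(c):
    # Multiplying by a fresh random square re-randomizes the ciphertext
    # while preserving its quadratic-residue class, i.e. the plaintext bit.
    s = random.randrange(2, N)
    while math.gcd(s, N) != 1:
        s = random.randrange(2, N)
    return (c * pow(s, 2, N)) % N

def decrypt(c):
    # Knowing the factor P, test quadratic residuosity by Euler's criterion.
    return 0 if pow(c, (P - 1) // 2, P) == 1 else 1

c = encrypt(1)
assert decrypt(c) == decrypt(blind(blind(c))) == 1
```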
We begin by describing a sub-protocol for preprocessing the table Ti before resampling begins. Let
S be the 2^d indices of the rows of the table Ti. For ease of notation, we will refer to the d neighbors of Xi as V1, . . . , Vd. The purpose of the sub-protocol is for Xi and its neighbors to compute a random permutation π of S (which can be thought of as a random permutation of the rows of Ti) in such a way that during the protocol, each Vj ∈ N(Xi) learns only the sets {π(x) : Vj = 0} and {π(x) : Vj = 1}, and Xi learns nothing.
The sub-protocol is quite simple. First each neighbor Vj of Xi encrypts column j of Ti using its own public key and passes the encrypted column to Xi. Next Xi encrypts column d + 1 using its own public key. Xi then concatenates the d + 1 encrypted columns together to form an encrypted version of Ti in which column j is encrypted using the public key of Vj for 1 ≤ j ≤ d and column d + 1 is encrypted using the public key of Xi. Xi then takes the resulting table, randomly permutes the rows, and blinds (randomly re-encrypts) each entry using the appropriate public keys (i.e. the key of Vj for column j where 1 ≤ j ≤ d and its own public key for column d + 1). At this point, Xi sends the resulting table to its distinguished neighbor N*(Xi).
The purpose of the blinding steps here is to prevent parties from tracking correspondences between cleartext and encrypted table entries. For instance, without blinding above, N*(Xi) could reconstruct the permutation chosen by Xi by seeing how its own encrypted values have been rearranged. Now from the perspective of N*(Xi), d columns of the table will look like uniformly distributed random bits. N*(Xi) will still be able to decrypt the column of the table that corresponds to its own values, but it will become clear that decrypting this column alone cannot yield useful information.
In the next step in the protocol, N*(Xi) re-encrypts column d + 1 of the table with its own public key. It then randomly permutes the rows of the table, blinds each entry using the appropriate public keys (those of Vj for columns 1 ≤ j ≤ d and its own for column d + 1), and sends the updated table back to Xi. At this point, every entry in the table will look like random bits to Xi. Each column j will be encrypted by the public key of Vj, with the exception of the final column, which will be encrypted by both Xi and N*(Xi). Call this new table T′i.
Once these encrypted tables have been computed for each node, we begin the main Gibbs sampling
protocol. We inductively assume that at the start of each step, for each Xj ∈ X, the current value of Xj is distributed between Xj and N*(Xj). At the end of the step, the only information that has been learned is the new value of a particular node Xi, but distributed between Xi and N*(Xi).
Consider a neighbor Vj of Xi . Vj can decrypt column j of Ti0 in order to learn which rows correspond
to its value being 0 and which rows correspond to its values being 1. While Vj alone does not know
what its current value is, Vj and N ? (Vj ) could compute it together, and thus could together figure
out which rows of the permutation correspond to Vj ?s current value. By Theorem 1, since there is a
way for them to compute this information ignoring privacy, we can construct an efficient protocol for
Vj , N ? (Vj ), and Xi to perform this computation such that Xi learns only the rows that correspond
to Vj ?s value (and in particular does not learn what this value is), while Vj and N ? (Vj ) learn nothing.
After this secure computation of partitions has been completed for all neighbors of X i , Xi will be
able to compute the intersection of the subsets of rows it has received from each neighbor. This
intersection will be a single row corresponding to the current values of all nodes in N (X i ). Initially,
Xi will not be able to decrypt any of the entries in this row. However, Xi and N ? (Xi ) could together
decrypt the value in column d + 1, use this value in order to sample Xi's new value according to the appropriate distribution, and distribute the new value between themselves. Calling upon Theorem 1 once again, this means that we can construct an efficient protocol for Xi and N*(Xi) to together complete these computations in such a way that they only learn the new bits bi and b′i respectively.
Each time the value of a node Xi is resampled, Xi and N*(Xi) repeat the process of blinding and permuting the rows of T′i. This prevents Xi and its neighbors from learning how frequently they take on different values throughout the sampling process. After the value of each node has been privately resampled sufficiently many times, we can use one final application of secure multi-party computation between each node Xi and its distinguished neighbor N*(Xi) to allow Xi to learn its final value.
As with standard Gibbs sampling, we also need to specify a schedule by which vertices in the
Markov network will have their values updated, as well as the number of iterations of this schedule,
which will in turn determine how close the sampled distribution is to the true joint (stationary)
distribution. Since our interests are in privacy considerations only, let us use PrivateGibbs to
refer to the protocol described above when applied to any fixed Markov network, combined with
some fixed updating schedule (such as random or a fixed ordering) and some number r of iterations.
Theorem 3 Under standard cryptographic assumptions⁷, PrivateGibbs computes a sample from
the joint distribution after r iterations, with every variable learning its own value and nothing else.
Direct communication occurs only between variables who are immediate neighbors or two steps
away, and secure function computation is never invoked on sets of more than three variables.
The full proof is again omitted, but largely follows the sketch above. We note that PrivateGibbs enjoys an even stronger privacy property: even if any subset of parties collude by combining their post-protocol views, they can learn nothing not implied by their combined sampled values. Furthermore, any convergence guarantees that hold for standard Gibbs sampling [4, 5] with the same updating schedule will also hold for the secure version.
References
[1] C. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[2] G. Casella and E. George. Explaining the Gibbs sampler. The American Statistician, 46:167-174, 1992.
[3] Y. Dodis, S. Halevi, and T. Rabin. A cryptographic solution to a game theoretic problem. In CRYPTO, pages 112-130, 2000.
[4] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721-741, 1984.
[5] A. Gibbs. Bounding convergence time of the Gibbs sampler in Bayesian image restoration. Biometrika, 87:749-766, 2000.
[6] O. Goldreich. Foundations of Cryptography, Volume 2. Cambridge University Press, 2004.
[7] A. Ihler, J. Fisher III, and A. Willsky. Loopy belief propagation: Convergence and effects of message errors. Journal of Machine Learning Research, 6:905-936, 2005.
[8] M. Kearns, M. Littman, and S. Singh. Graphical models for game theory. In Uncertainty in Artificial Intelligence, 2001.
[9] M. Naor and K. Nissim. Communication preserving protocols for secure function evaluation. In ACM Symposium on Theory of Computing, pages 590-599, 2001.
[10] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[11] P. Shenoy and G. Shafer. Axioms for probability and belief-function propagation. In Uncertainty in Artificial Intelligence, pages 169-198, 1990.
[12] V. Teague. Selecting correlated random actions. In Financial Cryptography, pages 181-195, 2004.
[13] J. Yedidia, W. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. In Exploring Artificial Intelligence in the New Millennium. Morgan Kaufmann, 2003.
7. An example would be intractability of recognizing quadratic residues.
Jean-Yves Audibert
Willow Project - Certis Lab
ParisTech, Ecole des Ponts
77455 Marne-la-Vallée, France
audibert@certis.enpc.fr
Abstract
We consider the learning task consisting in predicting as well as the best function
in a finite reference set G up to the smallest possible additive term. If R(g) denotes
the generalization error of a prediction function g, under reasonable assumptions
on the loss function (typically satisfied by the least square loss when the output is bounded), it is known that the progressive mixture rule ĝ satisfies
$$\mathbb{E}\,R(\hat g) \;\le\; \min_{g\in G} R(g) + Cst\,\frac{\log|G|}{n}, \qquad (1)$$
where n denotes the size of the training set, and E denotes the expectation w.r.t. the training set distribution. This work shows that, surprisingly, for appropriate reference sets G, the deviation convergence rate of the progressive mixture rule is no better than $Cst/\sqrt{n}$: it fails to achieve the expected Cst/n. We also provide an algorithm which does not suffer from this drawback, and which is optimal in
both deviation and expectation convergence rates.
1 Introduction
Why are we concerned by deviations? The efficiency of an algorithm can be summarized by its expected risk, but this does not describe the fluctuations of its risk. In several application fields of learning algorithms, these fluctuations play a key role: in finance for instance, the bigger the losses can be, the more money the bank needs to freeze in order to alleviate these possible losses. In this case, a "good" algorithm is an algorithm having not only low expected risk but also small deviations.
Why are we interested in the learning task of doing as well as the best prediction function of a given
finite set? First, one way of doing model selection among a finite family of submodels is to cut the
training set into two parts, use the first part to learn the best prediction function of each submodel
and use the second part to learn a prediction function which performs as well as the best of the
prediction functions learned on the first part of the training set. This scheme is very powerful since
it leads to theoretical results, which, in most situations, would be very hard to prove without it. Our
work here is related to the second step of this scheme.
Secondly, assume we want to predict the value of a continuous variable, and that we have many
candidates for explaining it. An input point can then be seen as the vector containing the prediction
of each candidate. The problem is what to do when the dimensionality d of the input data (equivalently the number of prediction functions) is much higher than the number of training points n. In
this setting, one cannot use linear regression and its variants in order to predict as well as the best
candidate up to a small additive term. Besides, (penalized) empirical risk minimization is doomed
to be suboptimal (see the second part of Theorem 2 and also [1]).
As far as the expected risk is concerned, the only known correct way of predicting as well as the
best prediction function is to use the progressive mixture rule or its variants. These algorithms are
introduced in Section 2 and their main good property is given in Theorem 1. In this work we prove
that they do not work well as far as risk deviations are concerned (see the second part of Theorem 3). We also provide a new algorithm for this "predict as well as the best" problem (see the end of
Section 4).
2 The progressive mixture rule and its variants
We assume that we observe n pairs of input-output denoted Z1 = (X1, Y1), . . . , Zn = (Xn, Yn), and that each pair has been independently drawn from the same unknown distribution denoted P. The input and output spaces are denoted respectively X and Y, so that P is a probability distribution on the product space Z ≜ X × Y. The quality of a (prediction) function g : X → Y is measured by the risk (or generalization error):
$$R(g) = \mathbb{E}_{(X,Y)\sim P}\,\ell[Y, g(X)],$$
where ℓ[Y, g(X)] denotes the loss (possibly infinite) incurred by predicting g(X) when the true output is Y. We work under the following assumptions for the data space and the loss function ℓ : Y × Y → ℝ ∪ {+∞}.
Main assumptions. The input space is assumed to be infinite: |X| = +∞. The output space is a non-trivial (i.e. infinite) interval of ℝ symmetrical w.r.t. some a ∈ ℝ: for any y ∈ Y, we have 2a − y ∈ Y. The loss function is
• uniformly exp-concave: there exists λ > 0 such that for any y ∈ Y, the set {y′ ∈ ℝ : ℓ(y, y′) < +∞} is an interval containing a on which the function y′ ↦ e^{−λℓ(y,y′)} is concave.
• symmetrical: for any y1, y2 ∈ Y, ℓ(y1, y2) = ℓ(2a − y1, 2a − y2),
• admissible: for any y, y′ ∈ Y∩]a; +∞[, ℓ(y, 2a − y′) > ℓ(y, y′),
• well behaved at center: for any y ∈ Y∩]a; +∞[, the function ℓy : y′ ↦ ℓ(y, y′) is twice continuously differentiable on a neighborhood of a and ℓ′y(a) < 0.
These assumptions imply that
• Y necessarily has one of the following forms: ]−∞; +∞[, [a − β; a + β] or ]a − β; a + β[ for some β > 0.
• for any y ∈ Y, from the exp-concavity assumption, the function ℓy : y′ ↦ ℓ(y, y′) is convex on the interval on which it is finite¹. As a consequence, the risk R is also a convex function (on the convex set of prediction functions for which it is finite).
The assumptions were motivated by the fact that they are satisfied in the following settings:
• least square loss with bounded outputs: Y = [ymin; ymax] and ℓ(y1, y2) = (y1 − y2)². Then we have a = (ymin + ymax)/2 and may take λ = 1/[2(ymax − ymin)²].
• entropy loss: Y = [0; 1] and ℓ(y1, y2) = y1 log(y1/y2) + (1 − y1) log((1 − y1)/(1 − y2)). Note that ℓ(0, 1) = ℓ(1, 0) = +∞. Then we have a = 1/2 and may take λ = 1.
• exponential (or AdaBoost) loss: Y = [−ymax; ymax] and ℓ(y1, y2) = e^{−y1 y2}. Then we have a = 0 and may take λ = e^{−y²max}.
• logit loss: Y = [−ymax; ymax] and ℓ(y1, y2) = log(1 + e^{−y1 y2}). Then we have a = 0 and may take λ = e^{−y²max}.
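For concreteness, these four settings can be packaged as follows; this is a sketch of ours in which each helper returns the loss together with the symmetry point a and an admissible exp-concavity constant λ from the list above.

```python
import math

def least_square(y_min, y_max):
    loss = lambda y1, y2: (y1 - y2) ** 2
    return loss, (y_min + y_max) / 2, 1.0 / (2 * (y_max - y_min) ** 2)

def entropy():
    def loss(y1, y2):  # Kullback-Leibler style loss on [0, 1]
        def term(p, q):
            if p == 0.0:
                return 0.0
            return math.inf if q == 0.0 else p * math.log(p / q)
        return term(y1, y2) + term(1 - y1, 1 - y2)
    return loss, 0.5, 1.0

def exponential(y_max):  # AdaBoost loss
    return (lambda y1, y2: math.exp(-y1 * y2)), 0.0, math.exp(-y_max ** 2)

def logit(y_max):
    return (lambda y1, y2: math.log1p(math.exp(-y1 * y2))), 0.0, math.exp(-y_max ** 2)
```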
Progressive indirect mixture rule. Let G be a finite reference set of prediction functions. Under the
previous assumptions, the only known algorithms satisfying (1) are the progressive indirect mixture
rules defined below.
For any i ∈ {0, . . . , n}, the cumulative loss suffered by the prediction function g on the first i pairs of input-output is
$$\Sigma_i(g) \;\triangleq\; \sum_{j=1}^{i} \ell[Y_j, g(X_j)],$$
¹ Indeed, if φ denotes the function e^{−λℓy}, from Jensen's inequality, for any probability distribution, Eℓy(Y) = E[−λ⁻¹ log φ(Y)] ≥ −λ⁻¹ log Eφ(Y) ≥ −λ⁻¹ log φ(EY) = ℓy(EY).
where by convention we take Σ0 ≡ 0. Let π denote the uniform distribution on G. We define the probability distribution π̂i on G as
$$\hat\pi_i \;\propto\; e^{-\lambda \Sigma_i}\cdot \pi,$$
or equivalently, for any g ∈ G, $\hat\pi_i(g) = e^{-\lambda\Sigma_i(g)}\big/\sum_{g'\in G} e^{-\lambda\Sigma_i(g')}$. This distribution concentrates on functions having low cumulative loss up to time i. For any i ∈ {0, . . . , n}, let ĥi be a prediction function such that
$$\forall\,(x, y)\in Z,\qquad \ell[y, \hat h_i(x)] \;\le\; -\tfrac{1}{\lambda}\log \mathbb{E}_{g\sim\hat\pi_i}\, e^{-\lambda \ell[y, g(x)]}. \qquad (2)$$
The progressive indirect mixture rule produces the prediction function
$$\hat g_{pim} = \tfrac{1}{n+1}\sum_{i=0}^{n} \hat h_i.$$
From the uniform exp-concavity assumption and Jensen's inequality, ĥi does exist since one may take ĥi = E_{g∼π̂i} g. This particular choice leads to the progressive mixture rule, for which the predicted output for any x ∈ X is
$$\hat g_{pm}(x) = \sum_{i=0}^{n}\frac{1}{n+1}\sum_{g\in G}\frac{e^{-\lambda\Sigma_i(g)}}{\sum_{g'\in G} e^{-\lambda\Sigma_i(g')}}\, g(x).$$
Consequently, any result that holds for any progressive indirect mixture rule in particular holds for
the progressive mixture rule.
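The progressive mixture rule is straightforward to implement; the following numpy sketch (ours) computes ĝ_pm(x) by averaging the Gibbs-weighted mixtures over i = 0, . . . , n.

```python
import numpy as np

def progressive_mixture_predict(x, G, X, Y, loss, lam):
    """Prediction of the progressive mixture rule g_pm at a point x.
    G: list of prediction functions; (X, Y): the n training pairs;
    loss: function ell(y, y_pred); lam: exp-concavity parameter lambda."""
    n = len(X)
    cum = np.zeros(len(G))        # cumulative losses Sigma_i(g); Sigma_0 = 0
    preds = np.array([g(x) for g in G])
    out = 0.0
    for i in range(n + 1):
        w = np.exp(-lam * (cum - cum.min()))   # Gibbs weights pi_hat_i
        w /= w.sum()
        out += w @ preds                       # h_hat_i(x) = E_{pi_hat_i} g(x)
        if i < n:                              # update Sigma with the i-th pair
            cum += np.array([loss(Y[i], g(X[i])) for g in G])
    return out / (n + 1)
```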
The idea of a progressive mean of estimators has been introduced by Barron ([2]) in the context of density estimation with Kullback-Leibler loss. The form ĝ_pm is due to Catoni ([3]). It was also independently proposed in [4]. The study of this procedure was made in density estimation and least square regression in [5, 6, 7, 8]. Results for general losses can be found in [9, 10]. Finally, the progressive indirect mixture rule is inspired by the work of Vovk, Haussler, Kivinen and Warmuth [11, 12, 13] on sequential prediction and was studied in the "batch" setting in [10]. Finally, in the upper bounds we state, e.g. Inequality (1), one should notice that there is no constant larger than 1 in front of min_{g∈G} R(g), as opposed to some existing upper bounds (e.g. [14]). This work really studies the behaviour of the excess risk, that is the random variable R(ĝ) − min_{g∈G} R(g).
The largest integer smaller than or equal to the logarithm in base 2 of x is denoted by ⌊log₂ x⌋.
3 Expectation convergence rate
The following theorem, whose proof is omitted, shows that the expectation convergence rate of any
progressive indirect mixture rule is (i) at least (log |G|)/n and (ii) cannot be uniformly improved,
even when we consider only probability distributions on Z for which the output has almost surely
two symmetrical values (e.g. {−1;+1} classification with exponential or logit losses).
Theorem 1 Any progressive indirect mixture rule satisfies
$$\mathbb{E}\,R(\hat g_{pim}) \;\le\; \min_{g\in G} R(g) + \frac{\log|G|}{\lambda(n+1)}.$$
Let y1 ∈ Y − {a} and d be a positive integer. There exists a set G of d prediction functions such that: for any learning algorithm, there exists a probability distribution generating the data for which
• the output marginal is supported by 2a − y1 and y1: P(Y ∈ {2a − y1; y1}) = 1,
• $\mathbb{E}\,R(\hat g) \;\ge\; \min_{g\in G} R(g) + e^{-1}\bigl(1-\tfrac{1}{e}\bigr)\,\frac{\delta\,\lfloor\log_2|G|\rfloor}{n+1}$, with $\delta \triangleq \sup_{y\in Y}\,[\ell(y_1, a) - \ell(y_1, y)] > 0$.
The second part of Theorem 1 has the same (log|G|)/n rate as the lower bounds obtained in sequential prediction ([12]). From the link between sequential predictions and our "batch" setting with i.i.d.
data (see e.g. [10, Lemma 3]), upper bounds for sequential prediction lead to upper bounds for i.i.d.
data, and lower bounds for i.i.d. data lead to lower bounds for sequential prediction. The converse
of this last assertion is not true, so that the second part of Theorem 1 is not a consequence of the
lower bounds of [12].
The following theorem, whose proof is also omitted, shows that for an appropriate set G: (i) the empirical risk minimizer has a √((log|G|)/n) expectation convergence rate, and (ii) any empirical risk minimizer and any of its penalized variants are really poor algorithms in our learning task since their expectation convergence rate cannot be faster than √((log|G|)/n) (see [5, p.14] and [1] for results of the same spirit). This last point explains the interest we have in progressive mixture rules.
Theorem 2 If $B \triangleq \sup_{y,y',y''\in Y}\,[\ell(y, y') - \ell(y, y'')] < +\infty$, then any empirical risk minimizer, which produces a prediction function $\hat g_{erm} \in \mathrm{argmin}_{g\in G}\,\Sigma_n$, satisfies:
$$\mathbb{E}\,R(\hat g_{erm}) \;\le\; \min_{g\in G} R(g) + B\sqrt{\frac{2\log|G|}{n}}.$$
Let y1, ỹ1 ∈ Y∩]a; +∞[ and d be a positive integer. There exists a set G of d prediction functions such that: for any learning algorithm producing a prediction function in G (e.g. ĝ_erm) there exists a probability distribution generating the data for which
• the output marginal is supported by 2a − y1 and y1: P(Y ∈ {2a − y1; y1}) = 1,
• $\mathbb{E}\,R(\hat g) \;\ge\; \min_{g\in G} R(g) + \frac{\bar\delta}{8\sqrt{2}}\sqrt{\frac{\lfloor\log_2|G|\rfloor}{n}}$, with $\bar\delta \triangleq \ell(y_1, 2a - \tilde y_1) - \ell(y_1, \tilde y_1) > 0$.
g?G
The lower bound of Theorem 2 also says that one should not use cross-validation. This holds for the
loss functions considered in this work, and not for, e.g., the classification loss: ℓ(y, y′) = 1_{y≠y′}.
4 Deviation convergence rate
The following theorem shows that the deviation convergence rate of any progressive indirect mixture rule is (i) at least 1/√n and (ii) cannot be uniformly improved, even when we consider only probability distributions on Z for which the output has almost surely two symmetrical values (e.g. {−1;+1} classification with exponential or logit losses).
Theorem 3 If $B \triangleq \sup_{y,y',y''\in Y}\,[\ell(y, y') - \ell(y, y'')] < +\infty$, then any progressive indirect mixture rule satisfies: for any ε > 0, with probability at least 1 − ε w.r.t. the training set distribution, we have
$$R(\hat g_{pim}) \;\le\; \min_{g\in G} R(g) + B\sqrt{\frac{2\log(2\varepsilon^{-1})}{n+1}} + \frac{\log|G|}{\lambda(n+1)}.$$
Let y1 and ỹ1 in Y∩]a; +∞[ be such that ℓ_{y1} is twice continuously differentiable on [a; ỹ1], ℓ′_{y1}(ỹ1) ≤ 0 and ℓ″_{y1}(ỹ1) > 0. Consider the prediction functions g1 ≡ ỹ1 and g2 ≡ 2a − ỹ1. For any training set size n large enough, there exist ε > 0 and a distribution generating the data such that
• the output marginal is supported by y1 and 2a − y1,
• with probability larger than ε, we have
$$R(\hat g_{pim}) \;\ge\; \min_{g\in\{g_1, g_2\}} R(g) + c\sqrt{\frac{\log(\varepsilon^{-1})}{n}},$$
where c is a positive constant depending only on the loss function, the symmetry parameter a and the output values y1 and ỹ1.
Proof 1 See Section 5.
This result is quite surprising since it gives an example of an algorithm which is optimal in terms of
expectation convergence rate and for which the deviation convergence rate is (significantly) worse
than the expectation convergence rate.
In fact, despite their popularity based on their unique expectation convergence rate, the progressive
mixture rules are not good algorithms since a long argument essentially based on convexity shows
that the following algorithm has both expectation and deviation convergence rate of order 1/n. Let ĝ_erm be the minimizer of the empirical risk among functions in G. Let g̃ be the minimizer of the empirical risk in the star G* = ∪_{g∈G} [g; ĝ_erm]. The algorithm producing g̃ satisfies for some C > 0, for any ε > 0, with probability at least 1 − ε w.r.t. the training set distribution,
$$R(\tilde g) \;\le\; \min_{g\in G} R(g) + C\,\frac{\log(\varepsilon^{-1}|G|)}{n}.$$
This algorithm also has the benefit of being parameter-free. On the contrary, in practice, one will have recourse to cross-validation to tune the parameter λ of the progressive mixture rule.
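A sketch (ours) of this parameter-free two-step procedure; for simplicity the one-dimensional minimizations over the segments [g; ĝ_erm] are done by grid search rather than exactly.

```python
import numpy as np

def star_estimator(G, X, Y, loss, grid=51):
    """ERM over the star G* = union over g in G of the segments [g; g_erm]."""
    def emp_risk(f):
        return np.mean([loss(y, f(x)) for x, y in zip(X, Y)])

    g_erm = min(G, key=emp_risk)          # step 1: ERM over the finite set G

    def segment(g, t):                    # convex combination on [g; g_erm]
        return lambda x, g=g, t=t: (1 - t) * g(x) + t * g_erm(x)

    best, best_risk = g_erm, emp_risk(g_erm)
    for g in G:                           # step 2: ERM over the star
        for t in np.linspace(0.0, 1.0, grid):
            f = segment(g, t)
            r = emp_risk(f)
            if r < best_risk:
                best, best_risk = f, r
    return best
```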
To summarize, to predict as well as the best prediction function in a given set G, one should not
restrain the algorithm to produce its prediction function among the set G. The progressive mixture rules satisfy this principle since they produce a prediction function in the convex hull of G.
This makes it possible to achieve (log|G|)/n convergence rates in expectation. The proof of the lower bound of Theorem 3 shows that the progressive mixtures overfit the data: the deviations of their excess risk are not PAC bounded by C log(ε⁻¹|G|)/n, while an appropriate algorithm producing prediction functions on the edges of the convex hull achieves the log(ε⁻¹|G|)/n deviation convergence rate.
Future work might look at whether one can transpose this algorithm to the sequential prediction
setting, in which, up to now, the algorithms to predict as well as the best expert were dominated by
algorithms producing a mixture expert inside the convex hull of the set of experts.
5 Proof of Theorem 3
5.1 Proof of the upper bound
Let Z_{n+1} = (X_{n+1}, Y_{n+1}) be an input-output pair independent from the training set Z1, . . . , Zn and with the same distribution P. From the convexity of y′ ↦ ℓ(y, y′), we have
$$R(\hat g_{pim}) \;\le\; \frac{1}{n+1}\sum_{i=0}^{n} R(\hat h_i). \qquad (3)$$
Now from [15, Theorem 1] (see also [16, Proposition 1]), for any ε > 0, with probability at least 1 − ε, we have
$$\frac{1}{n+1}\sum_{i=0}^{n} R(\hat h_i) \;\le\; \frac{1}{n+1}\sum_{i=0}^{n} \ell\bigl(Y_{i+1}, \hat h_i(X_{i+1})\bigr) + B\sqrt{\frac{\log(\varepsilon^{-1})}{2(n+1)}}. \qquad (4)$$
Using [12, Theorem 3.8] and the exp-concavity assumption, we have
$$\sum_{i=0}^{n} \ell\bigl(Y_{i+1}, \hat h_i(X_{i+1})\bigr) \;\le\; \min_{g\in G}\,\sum_{i=0}^{n} \ell\bigl(Y_{i+1}, g(X_{i+1})\bigr) + \frac{\log|G|}{\lambda}. \qquad (5)$$
Let g̃ ∈ argmin_G R. By Hoeffding's inequality, with probability at least 1 − ε, we have
$$\frac{1}{n+1}\sum_{i=0}^{n} \ell\bigl(Y_{i+1}, \tilde g(X_{i+1})\bigr) \;\le\; R(\tilde g) + B\sqrt{\frac{\log(\varepsilon^{-1})}{2(n+1)}}. \qquad (6)$$
Merging (3), (4), (5) and (6), with probability at least 1 − 2ε, we get
$$R(\hat g_{pim}) \;\le\; \frac{1}{n+1}\sum_{i=0}^{n} \ell\bigl(Y_{i+1}, \tilde g(X_{i+1})\bigr) + \frac{\log|G|}{\lambda(n+1)} + B\sqrt{\frac{\log(\varepsilon^{-1})}{2(n+1)}} \;\le\; R(\tilde g) + B\sqrt{\frac{2\log(\varepsilon^{-1})}{n+1}} + \frac{\log|G|}{\lambda(n+1)}.$$
5.2 Sketch of the proof of the lower bound
We cannot use standard tools like Assouad's argument (see e.g. [17, Theorem 14.6]), because if it were possible, it would mean that the lower bound would hold for any algorithm and in particular for g̃, and this is false. To prove that progressive indirect mixture rules have no fast exponential deviation inequalities, we will show that on some event with not too small probability, for most of the i in {0, . . . , n}, π̂i concentrates on the wrong function.
The proof is organized as follows. First we define the probability distribution for which we will
prove that the progressive indirect mixture rules cannot have fast deviation convergence rates. Then
we define the event on which the progressive indirect mixture rules do not perform well. We lower
bound the probability of this excursion event. Finally we conclude by lower bounding R(ĝ_pim) on
gpim ) on
the excursion event.
Before starting the proof, note that from the "well behaved at center" and exp-concavity assumptions, for any y ∈ Y∩]a; +∞[, on a neighborhood of a we have ℓ″y ≥ λ(ℓ′y)², and since ℓ′y(a) < 0, the quantities y1 and ỹ1 exist. Due to limited space, some technical computations have been removed.
5.2.1 Probability distribution generating the data and first consequences.
Let θ ∈ ]0; 1] be a parameter to be tuned later. We consider a distribution generating the data such that the output distribution satisfies, for any x ∈ X,
$$P(Y = y_1 \,|\, X = x) = \tfrac{1+\theta}{2} = 1 - P(Y = y_2 \,|\, X = x),$$
where y2 = 2a − y1. Let ỹ2 = 2a − ỹ1. From the symmetry and admissibility assumptions, we have ℓ(y2, ỹ2) = ℓ(y1, ỹ1) < ℓ(y1, ỹ2) = ℓ(y2, ỹ1). Introduce
$$\delta \;\triangleq\; \ell(y_1, \tilde y_2) - \ell(y_1, \tilde y_1) \;>\; 0. \qquad (7)$$
We have
$$R(g_2) - R(g_1) = \tfrac{1+\theta}{2}\bigl[\ell(y_1, \tilde y_2) - \ell(y_1, \tilde y_1)\bigr] + \tfrac{1-\theta}{2}\bigl[\ell(y_2, \tilde y_2) - \ell(y_2, \tilde y_1)\bigr] = \theta\delta. \qquad (8)$$
Therefore g1 is the best prediction function in {g1, g2} for the distribution we have chosen. Introduce $W_j \triangleq 1_{Y_j=y_1} - 1_{Y_j=y_2}$ and $S_i \triangleq \sum_{j=1}^{i} W_j$. For any i ∈ {1, . . . , n}, we have
$$\Sigma_i(g_2) - \Sigma_i(g_1) = \sum_{j=1}^{i}\bigl[\ell(Y_j, \tilde y_2) - \ell(Y_j, \tilde y_1)\bigr] = \sum_{j=1}^{i} W_j\,\delta = \delta\, S_i.$$
The weight given by the Gibbs distribution π̂i to the function g1 is
$$\hat\pi_i(g_1) = \frac{e^{-\lambda\Sigma_i(g_1)}}{e^{-\lambda\Sigma_i(g_1)} + e^{-\lambda\Sigma_i(g_2)}} = \frac{1}{1 + e^{\lambda[\Sigma_i(g_1)-\Sigma_i(g_2)]}} = \frac{1}{1 + e^{-\lambda\delta S_i}}. \qquad (9)$$
5.2.2 An excursion event on which the progressive indirect mixture rules will not perform well.
Equality (9) leads us to consider the event
$$E_\tau = \bigl\{\,\forall i \in \{\tau, \dots, n\},\ S_i \le -\tau\,\bigr\},$$
with τ the smallest integer larger than (log n)/(λδ) such that n − τ is even (for convenience). We have
$$\frac{\log n}{\lambda\delta} \;\le\; \tau \;\le\; \frac{\log n}{\lambda\delta} + 2. \qquad (10)$$
The event E_τ can be seen as an excursion event of the random walk defined through the random variables W_j = 1_{Y_j=y_1} − 1_{Y_j=y_2}, j ∈ {1, . . . , n}, which are equal to +1 with probability (1+θ)/2 and −1 with probability (1−θ)/2.
From (9), on the event E_τ, for any i ∈ {τ, . . . , n}, we have
$$\hat\pi_i(g_1) \;\le\; \frac{1}{n+1}. \qquad (11)$$
This means that π̂i concentrates on the wrong function, i.e. the function g2 having larger risk (see (8)).
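This collapse is easy to observe numerically; the small simulation below (ours) draws the walk S_i and returns the Gibbs weights π̂_i(g1) of equation (9).

```python
import math
import random

def gibbs_weights_of_g1(n, theta, lam, delta, seed=0):
    """Draw W_j = +1 w.p. (1+theta)/2 and -1 otherwise, and return the
    weights pi_hat_i(g1) = 1 / (1 + exp(-lam * delta * S_i))."""
    rng = random.Random(seed)
    s, weights = 0, []
    for _ in range(n):
        s += 1 if rng.random() < (1 + theta) / 2 else -1
        weights.append(1.0 / (1.0 + math.exp(-lam * delta * s)))
    return weights

# On a long excursion with S_i <= -tau (tau of order log n / (lam*delta)),
# the weight of g1 stays below 1/(n+1) and the mixture tracks g2 instead.
w = gibbs_weights_of_g1(n=1000, theta=0.05, lam=1.0, delta=0.5)
```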
5.2.3 Lower bound of the probability of the excursion event.
This requires looking at the probability that a slightly shifted random walk on the integers has a very long excursion above a certain threshold. To lower bound this probability, we first look at the non-shifted random walk. Then we show that for a small enough shift parameter, probabilities of shifted random walk events are close to the ones associated with the non-shifted random walk.
Let N be a positive integer. Let σ1, . . . , σN be N independent Rademacher variables: P(σi = +1) = P(σi = −1) = 1/2. Let $s_i \triangleq \sum_{j=1}^{i}\sigma_j$ be the sum of the first i Rademacher variables. We start with the following lemma for sums of Rademacher variables (proof omitted).
Lemma 1 Let m and t be positive integers. We have
$$P\Bigl(\max_{1\le k\le N} s_k \ge t;\ s_N \ne t;\ |s_N - t| \le m\Bigr) = 2\,P\bigl(t < s_N \le t+m\bigr). \qquad (12)$$
Let σ′1, . . . , σ′N be N independent shifted Rademacher variables to the extent that P(σ′i = +1) = (1+θ)/2 = 1 − P(σ′i = −1). These random variables satisfy the following key lemma (proof omitted).
Lemma 2 For any set $A \subseteq \bigl\{(\sigma_1,\dots,\sigma_N)\in\{-1,1\}^N : \bigl|\sum_{i=1}^{N}\sigma_i\bigr| \le M\bigr\}$, where M is a positive integer, we have
$$P\bigl((\sigma'_1,\dots,\sigma'_N)\in A\bigr) \;\ge\; \Bigl(\frac{1-\theta}{1+\theta}\Bigr)^{M/2}\bigl(1-\theta^2\bigr)^{N/2}\,P\bigl((\sigma_1,\dots,\sigma_N)\in A\bigr). \qquad (13)$$
We may now lower bound the probability of the excursion event E_τ. Let M be an integer larger than τ. We still use $W_j \triangleq 1_{Y_j=y_1} - 1_{Y_j=y_2}$ for j ∈ {1, . . . , n}. By using Lemma 2 with N = n − 2τ, we obtain
$$P(E_\tau) \;\ge\; P\Bigl(W_1 = -1, \dots, W_{2\tau} = -1;\ \forall\, 2\tau < i \le n,\ \textstyle\sum_{j=2\tau+1}^{i} W_j \le \tau\Bigr)$$
$$= \Bigl(\frac{1-\theta}{2}\Bigr)^{2\tau}\, P\Bigl(\forall i \in \{1,\dots,N\},\ \textstyle\sum_{j=1}^{i}\sigma'_j \le \tau\Bigr)$$
$$\ge\; \Bigl(\frac{1-\theta}{2}\Bigr)^{2\tau}\Bigl(\frac{1-\theta}{1+\theta}\Bigr)^{M/2}\bigl(1-\theta^2\bigr)^{N/2}\, P\Bigl(|s_N| \le M;\ \forall i \in \{1,\dots,N\},\ s_i \le \tau\Bigr).$$
By using Lemma 1, since τ ≤ M, the r.h.s. probability can be lower bounded, and after some computations we obtain
$$P(E_\tau) \;\ge\; \tau\,\Bigl(\frac{1-\theta}{2}\Bigr)^{2\tau}\Bigl(\frac{1-\theta}{1+\theta}\Bigr)^{M/2}\bigl(1-\theta^2\bigr)^{N/2}\,\bigl[P(s_N = \tau) - P(s_N = M)\bigr], \qquad (14)$$
where we recall that τ has the order of log n, N = n − 2τ has the order of n, and that θ > 0 and M ≥ τ have to be appropriately chosen.
To control the probabilities of the r.h.s., we use Stirling's formula
$$n^n e^{-n}\sqrt{2\pi n}\,e^{1/(12n+1)} \;<\; n! \;<\; n^n e^{-n}\sqrt{2\pi n}\,e^{1/(12n)}, \qquad (15)$$
and get, for any s ∈ [0; N] such that N − s is even,
$$P(s_N = s) \;\ge\; \sqrt{\frac{2}{\pi N}}\,\Bigl(1-\frac{s^2}{N^2}\Bigr)^{-N/2}\Bigl(\frac{1-\frac{s}{N}}{1+\frac{s}{N}}\Bigr)^{s/2} e^{-\frac{1}{6(N+s)}-\frac{1}{6(N-s)}} \qquad (16)$$
and similarly
$$P(s_N = s) \;\le\; \sqrt{\frac{2}{\pi N}}\,\Bigl(1-\frac{s^2}{N^2}\Bigr)^{-N/2}\Bigl(\frac{1-\frac{s}{N}}{1+\frac{s}{N}}\Bigr)^{s/2} e^{\frac{1}{12N+1}}. \qquad (17)$$
These computations and (14) lead us to take M as the smallest integer larger than √n such that n − M is even. Indeed, from (10), (16) and (17), we obtain $\lim_{n\to+\infty} \sqrt n\,\bigl[P(s_N = \tau) - P(s_N = M)\bigr] = c$, where $c = \sqrt{2/\pi}\,\bigl(1 - e^{-1/2}\bigr) > 0$. Therefore for n large enough we have
$$P(E_\tau) \;\ge\; \frac{c\,\tau}{2\sqrt n}\,\Bigl(\frac{1-\theta}{2}\Bigr)^{2\tau}\Bigl(\frac{1-\theta}{1+\theta}\Bigr)^{M/2}\bigl(1-\theta^2\bigr)^{N/2}. \qquad (18)$$
The last two terms of the r.h.s. of (18) lead us to take θ of order 1/√n, up to possibly a logarithmic term. We obtain the following lower bound on the excursion probability.
Lemma 3 If $\theta = \sqrt{C_0 (\log n)/n}$ with C_0 a positive constant, then for any large enough n,
$$P(E_\tau) \;\ge\; \frac{1}{n^{C_0}}.$$
5.2.4 Behavior of the progressive indirect mixture rule on the excursion event.
From now on, we work on the event E_τ. We have $\hat g_{pim} = \bigl(\sum_{i=0}^{n}\hat h_i\bigr)/(n+1)$. We still use $\delta \triangleq \ell(y_1,\tilde y_2) - \ell(y_1,\tilde y_1) = \ell(y_2,\tilde y_1) - \ell(y_2,\tilde y_2)$. On the event E_τ, for any x ∈ X and any i ∈ {τ, . . . , n}, by definition of ĥi, we have
$$\ell[y_2, \hat h_i(x)] - \ell(y_2, \tilde y_2) \;\le\; -\tfrac{1}{\lambda}\log \mathbb{E}_{g\sim\hat\pi_i}\, e^{-\lambda\{\ell[y_2, g(x)] - \ell(y_2, \tilde y_2)\}}$$
$$= -\tfrac{1}{\lambda}\log\bigl(e^{-\lambda\delta} + (1 - e^{-\lambda\delta})\,\hat\pi_i(g_2)\bigr) \;\le\; -\tfrac{1}{\lambda}\log\Bigl(1 - (1 - e^{-\lambda\delta})\tfrac{1}{n+1}\Bigr).$$
In particular, for any n large enough, we have $\ell[y_2, \hat h_i(x)] - \ell(y_2, \tilde y_2) \le C n^{-1}$, with C > 0 independent from θ. From the convexity of the function y ↦ ℓ(y2, y) and by Jensen's inequality, we obtain
$$\ell[y_2, \hat g_{pim}(x)] - \ell(y_2, \tilde y_2) \;\le\; \frac{1}{n+1}\sum_{i=0}^{n}\bigl(\ell[y_2, \hat h_i(x)] - \ell(y_2, \tilde y_2)\bigr) \;\le\; \frac{\tau\delta}{n+1} + C n^{-1} \;<\; C_1\,\frac{\log n}{n}$$
for some constant C_1 > 0 independent from θ. Let us now prove that for n large enough, we have
$$\tilde y_2 \;\le\; \hat g_{pim}(x) \;\le\; \tilde y_2 + C\sqrt{\frac{\log n}{n}} \;\le\; \tilde y_1, \qquad (19)$$
with C > 0 independent from θ.
From (19), we obtain
$$R(\hat g_{pim}) - R(g_1) = \tfrac{1+\theta}{2}\bigl(\ell(y_1, \hat g_{pim}) - \ell(y_1, \tilde y_1)\bigr) + \tfrac{1-\theta}{2}\bigl(\ell(y_2, \hat g_{pim}) - \ell(y_2, \tilde y_1)\bigr)$$
$$= \tfrac{1+\theta}{2}\bigl(\ell_{y_1}(\hat g_{pim}) - \ell_{y_1}(\tilde y_1)\bigr) + \tfrac{1-\theta}{2}\bigl(\ell_{y_1}(2a - \hat g_{pim}) - \ell_{y_1}(\tilde y_2)\bigr)$$
$$= \tfrac{1+\theta}{2}\bigl(\delta + \ell_{y_1}(\hat g_{pim}) - \ell_{y_1}(\tilde y_2)\bigr) + \tfrac{1-\theta}{2}\bigl(-\delta + \ell_{y_1}(2a - \hat g_{pim}) - \ell_{y_1}(\tilde y_1)\bigr)$$
$$\ge\; \theta\delta - (\hat g_{pim} - \tilde y_2)\,\bigl|\ell'_{y_1}(\tilde y_2)\bigr| \;\ge\; \theta\delta - C_2\sqrt{\frac{\log n}{n}}, \qquad (20)$$
with C_2 independent from θ. We may take $\theta = \frac{2C_2}{\delta}\sqrt{\frac{\log n}{n}}$ and obtain: for n large enough, on the event E_τ, we have $R(\hat g_{pim}) - R(g_1) \ge C\sqrt{(\log n)/n}$. From Lemma 3, this inequality holds with probability at least $1/n^{C_4}$ for some $C_4 > 0$. To conclude, for any n large enough, there exists ε > 0 such that with probability at least ε, $R(\hat g_{pim}) - R(g_1) \ge c\sqrt{\log(\varepsilon^{-1})/n}$, where c is a positive constant depending only on the loss function, the symmetry parameter a and the output values y1 and ỹ1.
References
[1] G. Lecué. Suboptimality of penalized empirical risk minimization in classification. In Proceedings of the 20th annual conference on Computational Learning Theory, 2007.
[2] A. Barron. Are Bayes rules consistent in information? In T.M. Cover and B. Gopinath, editors, Open Problems in Communication and Computation, pages 85-91. Springer, 1987.
[3] O. Catoni. A mixture approach to universal model selection. Preprint LMENS 97-30, available from http://www.dma.ens.fr/edition/preprints/Index.97.html, 1997.
[4] A. Barron and Y. Yang. Information-theoretic determination of minimax rates of convergence. Ann. Stat., 27(5):1564-1599, 1999.
[5] O. Catoni. Universal aggregation rules with exact bias bound. Preprint n.510, http://www.proba.jussieu.fr/mathdoc/preprints/index.html#1999, 1999.
[6] G. Blanchard. The progressive mixture estimator for regression trees. Ann. Inst. Henri Poincaré, Probab. Stat., 35(6):793-820, 1999.
[7] Y. Yang. Combining different procedures for adaptive regression. Journal of Multivariate Analysis, 74:135-161, 2000.
[8] F. Bunea and A. Nobel. Sequential procedures for aggregating arbitrary estimators of a conditional mean, 2005. Technical report.
[9] A. Juditsky, P. Rigollet, and A.B. Tsybakov. Learning by mirror averaging. Preprint n.1034, Laboratoire de Probabilités et Modèles Aléatoires, Universités Paris 6 and Paris 7, 2005.
[10] J.-Y. Audibert. A randomized online learning algorithm for better variance control. In Proceedings of the 19th annual conference on Computational Learning Theory, pages 392-407, 2006.
[11] V.G. Vovk. Aggregating strategies. In Proceedings of the 3rd annual workshop on Computational Learning Theory, pages 371-386, 1990.
[12] D. Haussler, J. Kivinen, and M. K. Warmuth. Sequential prediction of individual sequences under general loss functions. IEEE Trans. on Information Theory, 44(5):1906-1925, 1998.
[13] V.G. Vovk. A game of prediction with expert advice. Journal of Computer and System Sciences, pages 153-173, 1998.
[14] M. Wegkamp. Model selection in nonparametric regression. Ann. Stat., 31(1):252-273, 2003.
[15] T. Zhang. Data dependent concentration bounds for sequential prediction algorithms. In Proceedings of the 18th annual conference on Computational Learning Theory, pages 173-187, 2005.
[16] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 50(9):2050-2057, 2004.
[17] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer-Verlag, 1996.
Mehul Parsana1
mehul.parsana@gmail.com
Sourangshu Bhattacharya1
sourangshu@gmail.com
Chiranjib Bhattacharyya1
chiru@csa.iisc.ernet.in
K. R. Ramakrishnan2
krr@ee.iisc.ernet.in
Abstract
This paper introduces kernels on attributed pointsets, which are sets of vectors embedded in an euclidean space. The embedding gives the notion of neighborhood,
which is used to define positive semidefinite kernels on pointsets. Two novel kernels on neighborhoods are proposed, one evaluating the attribute similarity and
the other evaluating shape similarity. Shape similarity function is motivated from
spectral graph matching techniques. The kernels are tested on three real life applications: face recognition, photo album tagging, and shot annotation in video
sequences, with encouraging results.
1 Introduction
In recent times, one of the major challenges in kernel methods has been the design of kernels on structured data, e.g. sets [9, 17, 15], graphs [8, 3], strings, automata, etc. In this paper, we propose kernels on a type of structured objects called attributed pointsets [18]. Attributed pointsets are points embedded
points in the euclidean space yields a notion of neighborhood of each point which is exploited in
designing new kernels. Also, we describe the notion of similarity between pointsets which model
many real life scenarios and incorporate it in the proposed kernels.
The main contribution of this paper is definition of two different kernels on neighborhoods. These
neighborhood kernels are then used to define kernels on the entire pointsets. The first kernel treats the
neighborhoods as sets of vectors for calculating the similarity. Second kernel calculates similarity
in shape of the two neighborhoods. It is motivated using spectral graph matching techniques [16].
We demonstrate practical applications of the kernels on the well known task of face recognition [20],
and two other novel tasks of tagging photo albums and annotation of shots in video sequences. For
the face recognition task, we test our kernels on benchmark datasets and compare their performance
with state-of-the-art algorithms. Our kernels outperform the existing methods in many cases. The
kernels also perform according to expectation on the two novel applications. Section 2 defines
attributed pointsets and contrasts it with related notions. Section 3 proposes two kernels and section
4 describes experimental results.
2 Definition and related work
An attributed pointset [18, 1] (a.k.a. point pattern) X is a set of points in R^u with attributes or labels (real vectors in this case) attached to each point. Thus, X = {(xi, di) | i = 1 . . . n}, where xi ∈ R^u and di ∈ R^v, v being the dimension of the attribute vector. The number of points in a pointset,
¹Dept. of Computer Science & Automation, ²Dept. of Electrical Engineering, Indian Institute of Science,
Bangalore - 560012, India.
n, is variable. Also, for practical purposes, pointsets with u = 2, 3 are of interest. The construct of pointsets is richer than that of sets of vectors [17] because of the structure formed by the embedding of the points in a euclidean space. However, they are less general than attributed graphs, because not all attributed graphs can be embedded into a euclidean space. Pointsets are useful in several domains including computer vision [18], computational biology [5], etc.
The notion of similarity between pointsets is also different from those between sets of vectors,
or graphs. The main aspect of similarity is that there should be correspondences (1-1 mappings)
between the points of the two pointsets such that the relative positions of corresponding points are the same. Also, the attribute vectors of the matching points should be similar. In the case of sets of vectors, the
kernel function captures the similarity between aggregate properties of the two sets, such as the
principle angles between spanned subspaces [17], or distance between the distributions generating
the vectors [9]. Kernels on graphs try to capture similarity in the graph topology by comparing the
number of similar paths [3], or comparing steady state distributions on of linear systems on graphs
[8].
For example, consider recognizing faces using local descriptors calculated at some descriptor points
(corner points in this case) on the face. It is necessary that subsets of descriptor points found in two
images of the same face should be approximately superimposable (slight changes may be due to a change of expression) and that the descriptor values for the corresponding points should be roughly the same. Thus, a face can be modeled as an attributed pointset X = {(xi, di) | i = 1 . . . n}, where xi ∈ R² is the coordinate of the ith descriptor point and di ∈ R^v is the
local descriptor vector at the ith descriptor point. Similar arguments can be provided for any object
recognition task.
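In code, an attributed pointset is just a pair of parallel arrays; a minimal container of our own design, reused by the later sketches, could be:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AttributedPointset:
    """X = {(x_i, d_i)}: points x_i in R^u with attribute vectors d_i in R^v."""
    points: np.ndarray      # shape (n, u); e.g. u = 2 for image coordinates
    attributes: np.ndarray  # shape (n, v); e.g. v-dimensional local descriptors

    def __post_init__(self):
        assert len(self.points) == len(self.attributes)
```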
A local descriptor based kernel was proposed for object recognition in a similar setting in [12]. Suppose $X^A = \{(x^A_i, d^A_i)\,|\, i = 1 \dots n_A\}$ and $X^B = \{(x^B_i, d^B_i)\,|\, i = 1 \dots n_B\}$ are two pointsets. The normalized sum kernel [12] was defined as $K_{NS}(X^A, X^B) = \frac{1}{n_A n_B}\sum_{i=1}^{n_A}\sum_{j=1}^{n_B}\bigl(K(d^A_i, d^B_j)\bigr)^p$, where $K(d^A_i, d^B_j)$ is some kernel function on the descriptors. It was argued in [12] that raising the kernel to a high power p approximately calculates the similarity between matched pairs of vectors. Using the RBF kernel $K_{RBF}(x, y) = e^{-\frac{\|x-y\|^2}{\sigma^2}}$, and adjusting the parameter p in σ, we get the normalized sum kernel as:
$$K_{NS}(X^A, X^B) = \frac{1}{n_A n_B}\sum_{i=1}^{n_A}\sum_{j=1}^{n_B} K_{RBF}(d^A_i, d^B_j) \qquad (1)$$
Observe that this kernel doesn't use the information in xi anywhere, and thus is actually a kernel on a set of vectors. In fact, this kernel can be derived as a special case of the set kernel proposed in [15]. The kernel $K(A, B) = \mathrm{trace}\bigl(\sum_r (A^T \tilde G_r B) F_r\bigr)$ becomes $K(A, B) = \sum_{ij} k(a_i, b_j) f_{ij}$ for $\tilde G_r = I$ and $F = \sum_r F_r$ (whose entries are $f_{ij}$), which should be positive semidefinite [15]. Thus, choosing $F = \mathbf{1}\mathbf{1}^T$ (all entries 1), multiplying the kernel by $\frac{1}{n_A n_B}$ and using $K_{RBF}$ as the kernel on vectors, we get back the kernel defined in (1). The normalized sum kernel is used as the basic kernel for development and validation of the new kernels proposed here. In the next section, we incorporate the position xi of the points using the concept of neighborhood.
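A direct vectorized sketch of (1) (ours), treating the descriptor sets as arrays:

```python
import numpy as np

def k_ns(DA, DB, sigma):
    """Normalized sum kernel (1). DA: (nA, v) and DB: (nB, v) descriptor
    arrays; returns the mean of all pairwise RBF similarities."""
    sq = ((DA[:, None, :] - DB[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return np.exp(-sq / sigma ** 2).mean()
```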
3 Kernels
3.1 Neighborhood kernels
The key idea in this section is to use spatially co-occurring points of a point to improve the similarity
values given by the kernel function. In other words, we hypothesize that similar points from two
pointsets should also have neighboring points which are similar. Thus, for each point we define a
neighborhood of the point and weight the similarity between each pair of points with the similarity
between their neighborhoods.
The k-neighborhood Ni of a point (xi, di) in a pointset X is defined as the set of k points (including itself) that are closest to it in the embedding euclidean space. So, $N_i = \{(x_j, d_j) \in X : \|x_i - x_j\| \le \|x_i - x_l\|\ \forall (x_l, d_l) \notin N_i\}$ with $|N_i| = k$.
Figure 1: Correspondences implicitly found by sum and neighborhood kernels
The neighborhood kernel between two points $(x^A_i, d^A_i)$ and $(x^B_j, d^B_j)$ is defined as:
$$K_N\bigl((x^A_i, d^A_i), (x^B_j, d^B_j)\bigr) = K_{RBF}(d^A_i, d^B_j)\cdot\frac{1}{|N^A_i|\,|N^B_j|}\sum_{(x^A_s, d^A_s)\in N^A_i}\ \sum_{(x^B_t, d^B_t)\in N^B_j} K_{RBF}(d^A_s, d^B_t) \qquad (2)$$
The neighborhood kernel (NK) between two pointsets X^A and X^B is thus defined as:
$$K_{NK}(X^A, X^B) = \frac{1}{n_A n_B}\sum_{i=1}^{n_A}\sum_{j=1}^{n_B} K_N\bigl((x^A_i, d^A_i), (x^B_j, d^B_j)\bigr) \qquad (3)$$
It is easy to see that K_NK is a positive semidefinite kernel function. Even though K_NK is a straightforward extension, it considerably improves the accuracy of K_NS. Figure 1 shows values of K_NS and
K_NK for four pairs of points from two pointsets modeling faces. Dark blue lines indicate the best matches
given by K_NS while bright blue lines indicate the best matches given by K_NK. In both cases, K_NK gives
the correct match while K_NS fails. The computational complexity of K_NK is O(k^2 n^2), k being the
neighborhood size and n the number of points. The next section proposes a kernel which uses the positions
of points (x_i) in a neighborhood more strongly to calculate similarity in shape.
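A direct Python sketch of eqs. (2) and (3) follows, reusing the rbf_kernel helper from the previous sketch; neighborhoods are found by brute-force sorting, and all names and defaults are illustrative.

    import numpy as np

    def k_neighborhood(X, i, k):
        """Indices of the k points of X closest to point i (including itself)."""
        dists = np.linalg.norm(X - X[i], axis=1)
        return np.argsort(dists)[:k]

    def neighborhood_kernel(X_A, D_A, X_B, D_B, k=4, sigma=1.0):
        """K_NK of eq. (3); O(k^2 n^2) as noted in the text."""
        nbrs_A = [k_neighborhood(X_A, i, k) for i in range(len(X_A))]
        nbrs_B = [k_neighborhood(X_B, j, k) for j in range(len(X_B))]
        total = 0.0
        for i in range(len(X_A)):
            for j in range(len(X_B)):
                # neighborhood-to-neighborhood similarity, eq. (2)
                nb = sum(rbf_kernel(D_A[s], D_B[t], sigma)
                         for s in nbrs_A[i] for t in nbrs_B[j])
                nb /= len(nbrs_A[i]) * len(nbrs_B[j])
                total += rbf_kernel(D_A[i], D_B[j], sigma) * nb
        return total / (len(X_A) * len(X_B))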
3.2 Spectral Neighborhood Kernel
The kernel defined in the previous section still uses a set-of-vectors kernel for finding similarity
between the neighborhoods. Here, we are interested in a kernel function which evaluates the similarity in relative position of the corresponding points. Since the neighborhoods being compared are
of fixed size, we assume that every point in a neighborhood has a corresponding point in the other.
Thus, the correspondences are given by a permutation of points in one of the neighborhoods. This
problem can be formulated as the weighted graph matching problem [16], for which the spectral method
is one of the popular heuristics. We use the features given by the spectral decomposition of the adjacency
matrix of the neighborhood to define a kernel function.
Given a neighborhood N_i we define its adjacency matrix A_i as A_i(s, t) = e^{-||x_s − x_t|| / σ},
∀ s, t such that (x_s, d_s), (x_t, d_t) ∈ N_i, where σ is a parameter. Given two neighborhoods
N_i^A and N_j^B, we are thus interested in a permutation π of the basis of the adjacency matrix of one of
the neighborhoods (say N_j^B), such that ||A_i^A − π(A_j^B)||_F is minimized, ||·||_F being the Frobenius
norm of a matrix.
It is well known that a matrix can be fully reconstructed from its spectral decomposition. Also, in the
case that fewer eigenvectors are used, the equation ||A − Σ_{i=1}^{k} λ_i φ_i φ_i^T||_F^2 = Σ_{j=k+1}^{n} λ_j^2 suggests
that the eigenvectors corresponding to the higher eigenvalues will give a better reconstruction. We use the one
eigenvector corresponding to the largest eigenvalue. Thus, the approximate adjacency matrix becomes
Ã = λ_1 φ_1 φ_1^T.
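The spectral projection of a neighborhood can be computed in a few lines. This Python sketch (our own, with illustrative names) builds the adjacency matrix and takes the absolute value of the leading eigenvector, as used in the definitions below.

    import numpy as np

    def spectral_projection(X, nbr_idx, sigma=1.0):
        """Return f = |phi_1| for a neighborhood's adjacency matrix.

        X: (n, 2) array of point coordinates; nbr_idx: indices of the neighborhood.
        A(s, t) = exp(-||x_s - x_t|| / sigma), and phi_1 is the eigenvector of
        the largest eigenvalue.
        """
        pts = X[nbr_idx]
        dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        A = np.exp(-dists / sigma)
        vals, vecs = np.linalg.eigh(A)        # A is symmetric
        return np.abs(vecs[:, np.argmax(vals)])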
Let π* be the optimal permutation that minimizes ||Ã_i^A − π(Ã_j^B)||_F. Note that here π applied on a
matrix implies a permutation of the basis. It is easy to see that the same permutation is induced on the basis
of the eigenvectors φ_j^B(1). Call f_i^A = |φ_i^A(1)| and f_j^B = |φ_j^B(1)| the spectral projection vectors
corresponding to neighborhoods N_i^A and N_j^B. Here φ_i^A(1), φ_j^B(1) are the eigenvectors corresponding
to the largest eigenvalues of Ã_i^A and Ã_j^B, and |φ(1)| is the vector of absolute values of the components of φ(1).
f(s) can be thought of as the projection of the sth point in the corresponding neighborhood onto R^1. It is
equivalent to seek a permutation π* which minimizes ||f_i^A − π(f_j^B)||, for comparing neighborhoods
N_i^A and N_j^B. The resulting similarity score is:

    S(N_i^A, N_j^B) = max_{π ∈ Π} T − ||f_i^A − π(f_j^B)||_2^2    (4)

where T is a threshold for converting the distance measure to a similarity, and Π is the set of all
permutations. However, this similarity function is not necessarily positive semidefinite.
To construct a positive semidefinite kernel giving the similarity between the vectors f_i^A and f_j^B,
we use the convolution kernel technique [7] on discrete structures. Let x ∈ X be a
composite object formed using parts from X_1, ..., X_m. Let R be a relation over X_1 ×
··· × X_m × X such that R(x_1, ..., x_m, x) is true if x is composed of x_1, ..., x_m. Let
R^{-1}(x) = {(x_1, ..., x_m) ∈ X_1 × ··· × X_m | R(x_1, ..., x_m, x) = true} and let K_1, ..., K_m be kernels
on X_1, ..., X_m, respectively. The convolution kernel K over X is defined as:

    K(x, y) = Σ_{(x_1,...,x_m) ∈ R^{-1}(x), (y_1,...,y_m) ∈ R^{-1}(y)} Π_{i=1}^{m} K_i(x_i, y_i)    (5)

Haussler [7] showed that if K_1, ..., K_m are symmetric and positive semidefinite, so is K.
For us, let X be the set of all neighborhoods and X_1, ..., X_m be the sets of spectral projections
of all points from all the neighborhoods. Here, note that even if the same point appears in different neighborhoods, the appearances will be considered to be different because the projections
are relative to the neighborhoods. Since each neighborhood has size k, in our case m = k. The
relation R is defined so that R(f(1), ..., f(k), N_i^A) is true iff the vector (f(1), ..., f(k)) = π(f_i^A)
for some permutation π. In other words, R(f(1), ..., f(k), N_i^A) is true iff f(1), ..., f(k) are the
spectral projections of the points of neighborhood N_i^A. Also, let K_i, i = 1 ... k, all be RBF kernels with the same parameter σ. Thus, from the above equation, the convolution kernel becomes
K(N_i^A, N_j^B) = k! Σ_{π ∈ Π} e^{-Σ_{l=1}^{k} (f_i^A(l) − f_j^B(π(l)))^2 / σ} = k! Σ_{π ∈ Π} e^{-||f_i^A − π(f_j^B)||^2 / σ}. Dividing by the
constant (k!)^2, we get the kernel K_SN as:

    K_SN(N_i^A, N_j^B) = (1/k!) Σ_{π ∈ Π} e^{-||f_i^A − π(f_j^B)||^2 / σ}    (6)
The spectral kernel (SK) K_SK between two pointsets X^A and X^B is thus defined as:

    K_SK(X^A, X^B) = (1/(n_A n_B)) Σ_{i=1}^{n_A} Σ_{j=1}^{n_B} K_RBF(d_i^A, d_j^B) K_SN(N_i^A, N_j^B)    (7)
The following theorem relates K_SN(N_i^A, N_j^B) to S(N_i^A, N_j^B) (eq. 4).

Theorem 3.1 Let N_i and N_j be two sub-structures with spectral projection vectors f^i and f^j. For
a large enough value of T such that all points are matched,

    lim_{σ→0} (K_SN(N_i, N_j))^σ = (e^{-T} / k!) e^{S(N_i, N_j)}.

Proof: Let π* be the permutation that gives the optimal score S(N_i, N_j). By definition, e^{S(N_i, N_j)} =
e^T e^{-||f^i − π*(f^j)||^2}.

    lim_{σ→0} (K_SN(N_i, N_j))^σ
    = lim_{σ→0} ((1/k!) Σ_{π ∈ Π} e^{-||f^i − π(f^j)||^2 / σ})^σ
    = (1/k!) e^{-||f^i − π*(f^j)||^2} lim_{σ→0} (1 + Σ_{π ∈ Π\{π*}} e^{-(||f^i − π(f^j)||^2 − ||f^i − π*(f^j)||^2) / σ})^σ
    = (1/k!) e^{-||f^i − π*(f^j)||^2}
Table 1: Recognition accuracy on AR face dataset (section 4.1)

Method        Smile    Angry    Scream   Glasses  Scarf    Left-Light  Right-Light
1-NN          96.3%    88.9%    57.0%    48.1%    3.0%     22.2%       17.8%
PCA           94.1%    79.3%    44.4%    32.9%    2.2%     7.4%        7.4%
LEM           78.6%    92.9%    31.3%    74.8%    47.4%    92.9%       91.1%
AMM           96.0%    96.0%    56.0%    80.0%    82.0%    NA          NA
Face-ARG      97.8%    96.3%    66.7%    80.7%    85.2%    98.5%       96.3%
Sum (eq (1))  96.19%   95.23%   83.80%   89.52%   60.00%   86.66%      80.95%
NK (eq (3))   98.09%   98.09%   85.71%   94.28%   65.71%   92.38%      86.66%
SK (eq (7))   99.04%   99.04%   86.66%   93.33%   65.71%   90.47%      84.76%
The computational complexity of this kernel is O(k! n^2), where k is the neighborhood size and n is the number
of descriptor points. However, since in practice only small neighborhood sizes are considered, the
computation time doesn't become prohibitive.
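To make the O(k! n^2) cost concrete, here is a brute-force Python sketch of eqs. (6) and (7) that enumerates all k! permutations. It reuses rbf_kernel from the first sketch, and F_A[i], F_B[j] stand for the spectral projection vectors of each point's neighborhood; the names are ours, not the paper's.

    import math
    import numpy as np
    from itertools import permutations

    def spectral_nbr_kernel(f_i, f_j, sigma=1.0):
        """K_SN of eq. (6); sums over all k! permutations of the k projections."""
        k = len(f_i)
        total = 0.0
        for perm in permutations(range(k)):
            diff = f_i - f_j[list(perm)]
            total += np.exp(-np.dot(diff, diff) / sigma)
        return total / math.factorial(k)

    def spectral_kernel(D_A, F_A, D_B, F_B, sigma=1.0):
        """K_SK of eq. (7): descriptor similarity weighted by K_SN."""
        n_A, n_B = len(D_A), len(D_B)
        total = 0.0
        for i in range(n_A):
            for j in range(n_B):
                total += (rbf_kernel(D_A[i], D_B[j], sigma)
                          * spectral_nbr_kernel(F_A[i], F_B[j], sigma))
        return total / (n_A * n_B)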
4 Experimental Results
In order to study the effectiveness of the proposed kernels for practical visual tasks, we applied them
to three problems. Firstly, the kernels were applied to the well known problem of face recognition
[20], and the results on two benchmark datasets (AR and ORL) were compared to existing state-of-the-art
methods. Next we used the spectral kernel to tag images in personal photo albums using the faces
of people present in them. Finally, the spectral kernel was used for annotation of video sequences
using the faces of people present.
Attributes. For face recognition, faces were modeled as attributed pointsets using local gabor descriptors [10] calculated at the corner points found using the Harris corner point detector [6]. At each point, gabor
descriptors for three different scales and four different orientations were calculated. Descriptors for 5
points (4 pixel neighbors and the point itself) were used for each of the 12 combinations, making a total of
60 descriptors per point. For image tagging and video annotation, faces were modeled as attributed
pointsets using SIFT local descriptors [11], having 128 descriptors per point.
The kernels were implemented in GNU C/C++. LAPACK [2] was used for calculation of eigenvectors and GNU GSL for calculation of permutations. LIBSVM [4] was used as the SVM based
classifier for classifying pointsets. The face detector provided in OpenCV was used for detecting
faces in album images and video frames.
Dataset. The AR dataset [13] is composed of color images of 135 people (75 men and 60 women).
The DB includes frontal view images with different facial expressions, illumination conditions, and
occlusion by sunglasses and scarf. After removing persons with corrupted images or missing any of
the 8 types of required images, a total of 105 persons (56 men and 49 women) were selected. All the
images were converted to greyscale and rescaled to 154 × 115 pixels. The ORL dataset is composed
of 10 images for each of 40 persons. The images have minor variations in pose, illumination and
scale. All 400 of the 112 × 92 pixel images were used for the experiments.
4.1 Face Recognition in AR face DB
The kernels proposed in this paper were tested on pointsets derived from images in the AR face DB.
Face recognition was posed as a multiclass classification problem, and SVMs were used along with the
proposed kernels. The AR face DB is a standard benchmark dataset, on which a recent comparison
of state-of-the-art methods for face recognition has been given in [14]. In table 1, we have restated
the results provided in [14] along with the results of our kernels. All the results reported in table
1 have been obtained using one normal (no occlusion or change of expression) face image as the
training set.
It can be seen that for all the images showing change of expression (Smile, Angry and Scream),
the pointset kernels outperform existing methods. Also, in case of occlusion of face by glasses, the
Table 2: Recognition accuracy on ORL dataset (section 4.2)

# of training images →  1        3        5
Sum (eq (1))            70.83%   92.50%   98.00%
NK (eq (3))             71.38%   93.57%   98.00%
SK (eq (7))             71.94%   93.92%   98.00%
Figure 2: Representative cluster from tagging of album
pointset kernels give better results than existing methods. However, in the case of occlusion by scarf,
the kernel based methods do not perform as well as Face-ARG or AMM. This failure is due to the
introduction of a large number of points in the scarf itself. It was observed that about 50% of
the descriptor points in the faces having scarves were in the scarf region of the image. Summing the
similarities over such a large number of extra points makes the overall kernel value noisy.

The proposed approach doesn't perform better than existing methods on images taken under extreme
variation in lighting conditions. This is due to the fact that the values of the local descriptors change
drastically with illumination. Also, some of the corner points disappear under different lighting
conditions. However, the performance of the kernels is comparable to the existing methods, thus demonstrating the effectiveness of modeling faces as attributed pointsets.
4.2 Recognition performance on ORL Dataset
Real life problems in face recognition also show minor variations in pose, which are addressed by
testing the kernels on images in the ORL dataset. The problem was posed as a multiclass classification
problem and an SVM was used along with the kernels for classification. Table 2 reports the
recognition accuracies of all three kernels for two different values of the parameters, and for 1, 3
and 5 training images.

It can be seen that even with images showing minor variations in pose, the proposed kernels perform
reasonably well. Also, due to the change in pose, the relative positions of points in the pointsets change.
This is reflected in the fact that the improvement due to the addition of position information in the kernels
is minor as compared to that shown on the AR dataset. For higher numbers of training images, the
performance of all the kernels saturates at 98%.
4.3 Tagging images in personal albums based on faces
The problem of tagging images in personal albums with the names of the people present in them is a problem of high practical relevance [19]. The spectral kernels were used to solve this problem. Images
from publicly available sources like http://www.flickr.com^1 were used for experimentation. Five personal albums having 20–55 images each were downloaded, and many images had up to
6 people. The face detector from the OpenCV library was used to automatically detect faces in the images. Detected faces were cropped and resized to 100 × 100 px resolution. 47–265 such faces were detected from
each album. To the best of our knowledge, there are no openly available techniques to benchmark
our method against.
Due to the non-availability of training data, the problem of image tagging was posed as a clustering
problem. Faces detected from the images were represented as attributed pointsets using SIFT local
descriptors, and the spectral kernel was evaluated on them. A threshold based clustering scheme was
used on the distance metric induced by the kernel, d(x, y) = sqrt(K(x, x) + K(y, y) − 2 K(x, y)).
Ideally, each cluster thus obtained should represent a person, and images containing faces from a
given cluster should be tagged with the name of that person.
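The paper does not spell out the clustering scheme beyond "threshold based"; the following Python sketch shows one plausible greedy variant on the kernel-induced metric, purely for illustration.

    import numpy as np

    def kernel_distance(K, i, j):
        """d(x, y) = sqrt(K(x,x) + K(y,y) - 2 K(x,y)) from a precomputed kernel matrix."""
        return np.sqrt(max(0.0, K[i, i] + K[j, j] - 2.0 * K[i, j]))

    def threshold_clustering(K, threshold):
        """Greedy clustering: each unassigned face seeds a cluster and absorbs
        all unassigned faces within `threshold` of the seed."""
        n = K.shape[0]
        labels = np.full(n, -1, dtype=int)
        next_label = 0
        for i in range(n):
            if labels[i] != -1:
                continue
            labels[i] = next_label
            for j in range(i + 1, n):
                if labels[j] == -1 and kernel_distance(K, i, j) <= threshold:
                    labels[j] = next_label
            next_label += 1
        return labels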
^1 We intend to make the dataset publicly available if no copyrights are violated.
Table 3: Face based album tagging

Album no.  No. of people (Actual)  No. of people (Identified)  % Identified  % False +ve
1          2                       4                           90%           0%
2          14                      2                           84%           10.52%
3          6                       3                           66.66%        8.33%
4          8                       2                           83.33%        19.44%
5          4                       —                           80.00%        14.70%
Figure 3: Keyframes of a few shots detected with annotation
Table 3 reports the results of the tagging experiments for five albums. "No. of people identified" reports
the number of clusters having more than one face, as a singleton cluster will always be correct for that
person. Thus, people appearing only once in the entire album are not reported, which reduces the
number of identified people. % identified and % false +ve are averaged over all clusters detected in the
album, and are calculated for each cluster as:

    % identified = (no. of correct faces in the cluster) / (total no. of faces of the person),
    % false +ve = (no. of false +ves in the cluster) / (total no. of faces in the cluster).

It can be seen that the kernel performs reasonably well on the dataset. Figure 2 shows a representative cluster with the first 8 images as true +ves and the
rest as false +ves.
4.4 Video annotation based on faces
The kernels were also used to perform video shot annotation based on the faces detected in video sequences. Experimentation was performed on videos from the "News and Public affairs" section of
www.archive.org and on music videos from www.youtube.com. Video was sampled at 1 frame
per second, and an experimental methodology similar to that of section 4.3 was used on the frames.

Figure 3 shows two representative shots corresponding to two candidates from "Election 2004,
presidential debate part 2", and one from the "Westlife - Seasons in the Sun" video. The faces annotating
the shots are shown on the left as thumbnails. It may be noted that for videos, high pose variation did
not reduce the accuracy of recognition, due to the gradual changing of pose. The results on detecting shots
were highly encouraging, thus demonstrating the varied applicability of the proposed attributed pointset
kernels.
5 Conclusion
In this article, we propose kernels on attributed pointsets. We define the notion of neighborhood
in an attributed pointset and propose two new kernels. The first kernel evaluates attribute similarities between the neighborhoods and uses the co-occurrence information to improve the performance
of kernels on sets of vectors. The second kernel uses the position information more strongly and
matches the shapes of the neighborhoods. This kernel function is motivated by spectral graph matching techniques.

The proposed kernels were validated on the well known task of face recognition on two popular
benchmark datasets. Results show that the proposed kernels perform competitively with the state-of-the-art techniques for face recognition. The spectral kernel was also used to perform two real life
tasks: tagging images in personal photo albums and annotating shots in videos. The results were
encouraging in both cases.
References
[1] Helmut Alt and Leonidas J. Guibas. Discrete geometric shapes: Matching, interpolation, and
approximation: A survey. Technical Report B 96-11, 1996.
[2] E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen. LAPACK Users? Guide. Society for
Industrial and Applied Mathematics, Philadelphia, PA, third edition, 1999.
[3] Karsten M. Borgwardt and Hans-Peter Kriegel. Shortest-path kernels on graphs. In ICDM
?05: Proceedings of the Fifth IEEE International Conference on Data Mining, pages 74?81,
Washington, DC, USA, 2005. IEEE Computer Society.
[4] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines, 2001.
Software available at http://www.csie.ntu.edu.tw/?cjlin/libsvm.
[5] Ingvar Eidhammer, Inge Jonassen, and William R. Taylor. Structure comparison and structure
patterns. Journal of Computational Biology, 7(5):685?716, 2000.
[6] C. Harris and M.J. Stephens. A combined corner and edge detector. In Proc. of Alvey Vision
Conf., 1988.
[7] David Haussler. Convolution kernels on discrete structures. Technical report, University of
California, Santa Cruz, 1999.
[8] Hisashi Kashima, Koji Tsuda, and Akihiro Inokuchi. Marginalized kernels between labeled
graphs. In Twentieth International Conference on Machine Learning (ICML), 2003.
[9] Risi Kondor and Tony Jebara. A kernel between sets of vectors. In Twentieth International
Conference on Machine Learning (ICML), 2003.
[10] Tai Sing Lee. Image representation using 2d gabor wavelets. IEEE TPAMI, 18(10):959?971,
1996.
[11] D. Lowe. Distinctive image features from scale-invariant keypoints. Int. Journal of Computer
Vision, 20:91?110, 2003.
[12] Siwei Lyu. Mercer kernels for object recognition with local features. In IEEE CVPR, 2005.
[13] A.M. Martinez and R. Benavente. The ar face database. CVC Technical Report, 24, 1998.
[14] Bo Gun Park, Kyoung Mu Lee, and Sang Uk Lee. Face recognition using face-arg matching.
IEEE TPAMI, 27(12):1982?1988, 2005.
[15] Amnon Shashua and Tamir Hazan. Algebraic set kernels with application to inference over
local image representations. In Neural Information Processing Systems (NIPS), 2004.
[16] Shinji Umeyama. An eigendecomposition approach to weighted graph matching problems.
IEEE transactions on pattern analysis and machine intelligence, 10(5):695?703, 1988.
[17] Lior Wolf and Amnon Shashua. Learning over sets using kernel principal angles. Journal of
Machine Learning Research, (4):913?931, 2003.
[18] Haim J. Wolfson and Isidore Rigoutsos. Geometric hashing: An overview. IEEE Comput. Sci.
Eng., 4(4):10?21, 1997.
[19] L. Zhang, L. Chen, M. Li, and H. Zhang. Automated annotation of human faces in family
albums, 2003.
[20] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld. Face recognition: A literature survey.
ACM Comput. Surv., 35(4):399?458, 2003.
| 3304 |@word kondor:1 norm:1 km:1 gradual:1 seek:1 decomposition:2 eng:1 shot:8 bai:1 score:2 existing:6 ka:2 com:4 comparing:3 current:1 gmail:2 cruz:1 shape:6 hypothesize:1 intelligence:1 fewer:1 prohibitive:1 selected:1 affair:1 ith:2 kyoung:1 detecting:2 firstly:1 org:1 zhang:2 five:2 along:3 become:1 tagging:10 karsten:1 nto:1 themselves:1 roughly:1 automatically:1 encouraging:3 actual:1 election:1 becomes:3 iisc:2 provided:3 matched:2 wolfson:1 string:1 eigenvector:1 minimizes:2 finding:1 nj:8 k2:4 classifier:1 uk:1 positive:6 engineering:1 local:10 treat:1 despite:1 path:2 interpolation:1 approximately:2 suggests:1 co:2 kfi:1 averaged:1 blackford:1 practical:4 testing:1 practice:1 thought:1 composite:1 matching:8 projection:6 word:2 gabor:3 get:3 onto:1 cannot:1 nb:7 www:4 equivalent:1 missing:1 pointsets:25 straightforward:1 automaton:1 survey:2 restated:1 resolution:1 haussler:2 spanned:1 embedding:4 notion:6 coordinate:1 variation:5 user:1 us:4 designing:1 pa:1 surv:1 recognition:21 labeled:1 database:1 observed:1 csie:1 akihiro:1 electrical:1 capture:2 calculate:1 region:1 news:1 sun:1 rescaled:1 mu:1 complexity:2 ideally:1 personal:5 distinctive:1 basis:3 represented:1 describe:1 chellappa:1 demmel:1 detected:6 aggregate:1 formation:1 neighborhood:40 choosing:1 whose:1 richer:1 heuristic:1 posed:3 solve:1 say:1 ace:3 annotating:2 cvpr:1 presidential:1 rosenfeld:1 itself:2 noisy:1 sequence:4 eigenvalue:3 tpami:2 propose:3 reconstruction:1 fr:2 neighboring:1 umeyama:1 iff:2 frobenius:1 cluster:11 r1:1 generating:1 object:5 pose:7 ij:1 minor:4 eq:6 dividing:1 pointset:9 implemented:1 indicate:2 implies:1 fij:2 correct:3 attribute:7 human:1 public:1 adjacency:4 argued:1 ied:1 ntu:1 mehul:2 extension:1 pl:1 considered:2 normal:1 guibas:1 mapping:1 bj:1 lyu:1 opencv:2 major:1 purpose:1 proc:1 label:1 krr:1 largest:2 weighted:2 always:1 pn:1 season:1 resized:1 derived:2 validated:1 improvement:1 contrast:1 industrial:1 helmut:1 detect:1 glass:2 inference:1 scream:2 scarf:6 nn:1 entire:2 relation:2 interested:2 pixel:3 arg:3 lapack:2 orientation:1 classification:3 overall:1 proposes:2 development:1 art:3 special:1 ernet:2 construct:2 once:1 having:4 washington:1 biology:2 park:1 k2f:1 icml:2 theart:1 minimized:1 jb:3 report:6 bangalore:1 few:1 njb:10 composed:3 ve:3 occlusion:4 william:1 interest:1 highly:1 mining:1 introduces:1 extreme:1 semidefinite:6 light:2 copyright:1 sorensen:1 xb:3 edge:1 necessary:1 facial:1 euclidean:6 taylor:1 koji:1 tsuda:1 amm:2 modeling:2 ar:8 applicability:1 subset:1 entry:2 recognizing:1 gr:1 reported:2 kn:15 corrupted:1 kxi:2 considerably:1 combined:1 person:7 borgwardt:1 international:3 lee:3 ym:1 na:10 benavente:1 containing:1 woman:2 corner:5 conf:1 chung:1 zhao:1 sang:1 li:1 converted:1 singleton:1 hisashi:1 automation:1 includes:1 availability:1 int:1 leonidas:1 performed:1 try:1 view:1 lowe:1 hazan:1 shashua:2 annotation:8 contribution:1 formed:2 ni:13 accuracy:5 bright:1 descriptor:18 publicly:2 yield:1 ofthe:1 multiplying:1 lighting:2 detector:4 siwei:1 flickr:1 definition:3 against:1 evaluates:2 failure:1 proof:1 attributed:16 di:12 lior:1 sampled:1 dataset:11 adjusting:1 popular:2 lim:4 color:1 improves:1 knowledge:1 actually:1 back:1 appears:1 higher:2 dt:3 hashing:1 reflected:1 methodology:1 evaluated:1 though:1 strongly:2 anderson:1 xa:4 anywhere:1 d:2 eqn:1 defines:1 aj:3 name:2 usa:1 k22:1 normalized:3 concept:1 true:5 tagged:1 spatially:1 symmetric:1 noted:1 steady:1 demonstrate:1 performs:1 fj:1 image:36 novel:3 fi:1 keyframes:1 
overview:1 attached:2 slight:1 ai:3 phillips:1 mathematics:1 dj:9 had:1 han:1 similarity:23 yk2:1 etc:2 closest:1 recent:2 showed:1 krbf:6 scenario:1 life:4 yi:1 exploited:1 seen:3 converting:1 shortest:1 stephen:1 rv:2 relates:1 keypoints:1 technical:3 match:4 calculation:2 lin:1 icdm:1 calculates:2 basic:1 vision:3 expectation:1 metric:1 kernel:102 represent:1 addition:1 cropped:1 cvc:1 addressed:1 source:1 alvey:1 extra:1 rest:1 archive:1 induced:2 db:4 greenbaum:1 smile:2 effectiveness:2 call:1 ee:1 easy:2 enough:1 automated:1 xj:4 topology:1 identified:5 reduce:2 idea:1 multiclass:2 amnon:2 motivated:3 expression:4 pca:1 f:3 peter:1 algebraic:1 useful:1 santa:1 eigenvectors:5 dark:1 svms:1 http:2 outperform:2 pnb:1 thumbnail:1 per:3 blue:2 discrete:3 key:1 four:1 threshold:2 demonstrating:2 changing:1 libsvm:3 graph:14 sum:6 angle:2 hammarling:1 dongarra:1 family:1 chih:2 orl:5 comparable:1 ki:2 gnu:2 angry:2 haim:1 correspondence:3 software:1 tag:1 aspect:1 argument:1 px:1 structured:2 according:1 combination:1 describes:1 sth:1 tw:1 making:1 lem:1 alse:2 openly:1 invariant:1 taken:1 chiranjib:1 equation:2 tai:1 cjlin:1 photo:4 fia:3 available:4 experimentation:2 competitively:1 observe:1 spectral:17 upto:1 appearing:1 occurrence:1 kashima:1 clustering:2 ensure:1 tony:1 marginalized:1 calculating:1 music:1 giving:1 risi:1 k1:1 disappear:1 society:2 intend:1 subspace:1 distance:3 sci:1 gun:1 ru:1 modeled:3 greyscale:1 debate:1 trace:1 design:1 perform:7 convolution:4 datasets:3 benchmark:5 sing:1 frame:3 dc:1 supa:1 varied:1 jebara:1 david:1 pair:3 required:1 bischof:1 raising:1 california:1 nip:1 kriegel:1 pattern:3 xm:10 kaa:1 challenge:1 including:2 max:1 video:14 power:1 ia:2 scheme:1 improve:2 sunglass:1 library:2 philadelphia:1 geometric:2 literature:1 kf:9 relative:4 shinji:1 embedded:3 fully:1 permutation:10 ksk:2 men:2 validation:1 eigendecomposition:1 downloaded:1 article:1 principle:1 mercer:1 classifying:1 sourangshu:2 drastically:1 guide:1 institute:1 india:1 neighbor:1 face:49 absolute:1 fifth:1 dimension:1 calculated:4 evaluating:2 tamir:1 doesn:3 transaction:1 reconstructed:1 approximate:1 implicitly:1 summing:1 xi:9 sk:3 table:7 reasonably:2 nia:13 csa:1 du:1 necessarily:1 domain:1 da:4 did:1 pk:1 main:2 edition:1 n2:4 martinez:1 inokuchi:1 x1:10 representative:3 fails:1 position:7 sub:1 xl:2 candidate:1 comput:2 third:1 wavelet:1 rk:1 theorem:2 removing:1 saturate:1 xt:2 jen:1 sift:2 showing:2 r2:1 x:2 svm:2 alt:1 dl:1 false:3 album:16 illumination:3 occurring:1 kx:1 nk:3 chen:1 ingvar:1 appearance:1 twentieth:2 visual:1 bo:1 chang:1 wolf:1 harris:2 acm:1 chiru:1 formulated:1 rbf:2 change:7 youtube:1 principal:1 called:1 ksn:6 kxs:1 total:2 experimental:3 e:1 otal:2 people:9 support:1 relevance:1 indian:1 frontal:1 violated:1 incorporate:2 dept:2 tested:2 |
2,542 | 3,305 | A General Boosting Method and its Application to
Learning Ranking Functions for Web Search
Zhaohui Zheng† Hongyuan Zha‡ Tong Zhang† Olivier Chapelle† Keke Chen† Gordon Sun†
† Yahoo! Inc.
701 First Avenue
Sunnyvale, CA 94089
{zhaohui,tzhang,chap,kchen,gzsun}@yahoo-inc.com
‡ College of Computing
Georgia Institute of Technology
Atlanta, GA 30032
zha@cc.gatech.edu
Abstract
We present a general boosting method extending functional gradient boosting to
optimize complex loss functions that are encountered in many machine learning
problems. Our approach is based on optimization of quadratic upper bounds of the
loss functions which allows us to present a rigorous convergence analysis of the
algorithm. More importantly, this general framework enables us to use a standard
regression base learner such as a single regression tree for fitting any loss function.
We illustrate an application of the proposed method in learning ranking functions
for Web search by combining both preference data and labeled data for training.
We present experimental results for Web search using data from a commercial
search engine that show significant improvements of our proposed methods over
some existing methods.
1 Introduction
There has been much interest in developing machine learning methods involving complex loss functions beyond those used in regression and classification problems [13]. Many methods have been
proposed dealing with a wide range of problems including ranking problems, learning conditional
random fields and other structured learning problems [1, 3, 4, 5, 6, 7, 11, 13]. In this paper we
propose a boosting framework that can handle a wide variety of complex loss functions. The proposed method uses a regression black box to optimize a general loss function based on quadratic
upper bounds, and it also allows us to present a rigorous convergence analysis of the method. Our
approach extends the gradient boosting approach proposed in [8] but can handle substantially more
complex loss functions arising from a variety of machine learning problems.
As an interesting and important application of the general boosting framework we apply it to the
problem of learning ranking functions for Web search. Specifically, we want to rank a set of documents according to their relevance to a given query. We adopt the following framework: we extract
a set of features x for each query-document pair, and learn a function h(x) so that we can rank the
documents using the values h(x), say x with larger h(x) values are ranked higher. We call such
a function h(x) a ranking function. In Web search, we can identify two types of training data for
learning a ranking function: 1) preference data indicating a document is more relevant than another
with respect to a query [11, 12]; and 2) labeled data where documents are assigned ordinal labels
representing degree of relevancy. In general, we will have both preference data and labeled data for
training a ranking function for Web search, leading to a complex loss function that can be handled
by our proposed general boosting method which we now describe.
2 A General Boosting Method
We consider the following general optimization problem:
    ĥ = arg min_{h ∈ H} R(h),    (1)
where h denotes a prediction function which we are interested in learning from the data, H is a prechosen function class, and R(h) is a risk functional with respect to h. We consider the following
form of the risk functional R:
n
1X
R(h) =
?i (h(xi,1 ), ? ? ? , h(xi,mi ), yi ),
(2)
n i=1
where ?i (h1 , . . . , hmi , y) is a loss function with respect to the ?rst m i arguments h1 , . . . , hmi .
For example, each function ?i can be a single variable function (mi = 1) such as in regression:
?i (h, y) = (h ? y)2 ; or a two-variable function (mi = 2), such as those in ranking based on pairwise comparisons: ?i (h1 , h2 , y) = max(0, 1 ? y(h1 ? h2 ))2 , where y ? {?1} indicates whether h1
is preferred to h2 or not; or it can be a multi-variable function as used in some structured prediction
problems: ?i (h1 , . . . , hmi , y) = supz ?(y, z) + ?(h, z) ? ?(h, y), where ? is a loss function [13].
Assume we do not have a general solver for the optimization problem (1), but we have a learning
algorithm A which we refer to as the regression weak learner. Given any set of data points X =
[x_1, ..., x_k], with corresponding target values R = [r_1, ..., r_k], weights W = [w_1, ..., w_k], and
tolerance ε > 0, the regression weak learner A produces a function ĝ = A(W, X, R, ε) ∈ C such
that

    Σ_{j=1}^{k} w_j (ĝ(x_j) − r_j)^2 ≤ min_{g ∈ C} Σ_{j=1}^{k} w_j (g(x_j) − r_j)^2 + ε.    (3)
Our goal is to use this weak learner A to solve the original optimization problem (1). Here H =
span(C), i.e., h ∈ H can be expressed as h(x) = Σ_j a_j h_j(x) with h_j ∈ C.
Friedman [8] proposed a solution when the loss function in (2) can be expressed as

    R(h) = Σ_{i=1}^{n} φ_i(h(x_i)),    (4)

which he named gradient boosting. The idea is to estimate the gradient ∇φ_i(h(x_i)) using regression at each step with uniform weighting, and update. However, there is no convergence proof.
Following his work, we consider an extension that is more principled, for which a convergence analysis can be obtained. We first rewrite (2) in the more general form:

    R(h) = R(h(x_1), ..., h(x_N)),    (5)

where N ≤ Σ_i m_i.^1 Note that R depends on h only through the function values h(x_i), and from
now on we identify the function h with the vector [h(x_i)]. Also the function R is considered to be a
function of N variables.
Our main observation is that for a twice differentiable risk functional R, at each tentative solution h_k,
we can expand R(h) around h_k using a Taylor expansion as

    R(h_k + g) = R(h_k) + ∇R(h_k)^T g + (1/2) g^T ∇^2 R(h_0) g,

where h_0 lies between h_k and h_k + g. The right hand side is almost quadratic, and we can then
replace it by a quadratic upper-bound

    R(h_k + g) ≤ R_k(g) = R(h_k) + ∇R(h_k)^T g + (1/2) g^T W g,    (6)

^1 We consider that all x_i are different, but some of the x_{i,m_i} in (2) might have been identical, hence the
inequality.
where W is a diagonal matrix upper bounding the Hessian between h_k and h_k + g. If we define
r_j = −[∇R(h_k)]_j / w_j, then for all g ∈ C, Σ_j w_j (g(x_j) − r_j)^2 is equal to the above quadratic form
(up to a constant). So g can be found by calling the regression weak learner A. Since at each
step we try to minimize an upper bound R_k of R, if we let the minimizer be g_k, it is clear that
R(h_k + g_k) ≤ R_k(g_k) ≤ R(h_k). This means that by optimizing with respect to the problem R_k
that can be handled by A, we also make progress with respect to optimizing R. The algorithm based
on this idea is listed in Algorithm 1 for the loss function in (5).
Convergence analysis of this algorithm can be established using the idea summarized above; see the
details in the appendix. However, in practice, instead of the quadratic upper bound (which has a theoretical guarantee that is easier to derive), one may also consider minimizing an approximation to the Taylor
expansion, which would be closer to a Newton type method.
Algorithm 1 Greedy Algorithm with Quadratic Approximation
Input: X = [x_l], l = 1, ..., N
let h_0 = 0
for k = 0, 1, 2, ...
    let W = [w_l], l = 1, ..., N, with either
        w_l = ∂^2 R / ∂h_k(x_l)^2              % Newton-type method with diagonal Hessian
        or W a global diagonal upper bound on the Hessian    % Upper-bound minimization
    let R = [r_l], l = 1, ..., N, where r_l = −w_l^{-1} ∂R/∂h_k(x_l)
    pick ε_k ≥ 0
    let g_k = A(W, X, R, ε_k)
    pick step-size s_k ≥ 0, typically by line search on R
    let h_{k+1} = h_k + s_k g_k
end
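As a concrete reading of Algorithm 1, here is a minimal Python sketch assuming the caller supplies the risk R, its gradient, a diagonal Hessian bound, and a weighted regression weak learner; the crude grid line search and all names are our own, not part of the algorithm as stated.

    import numpy as np

    def line_search(f, grid=np.linspace(0.0, 1.0, 21)):
        """Crude line search: evaluate f on a grid of step sizes, keep the best."""
        return min(grid, key=f)

    def quadratic_boost(X, R, grad_R, hess_diag_R, weak_learner, n_iter=100):
        """Sketch of Algorithm 1.

        R(h):           risk evaluated at the vector of predictions h.
        grad_R(h):      gradient of R at h.
        hess_diag_R(h): diagonal upper bound w on the Hessian at h.
        weak_learner(w, X, r): weighted least-squares fit; returns the
                               predictions g(x_l) as an array.
        """
        h = np.zeros(len(X))
        for k in range(n_iter):
            w = hess_diag_R(h)                  # W = diag(w)
            r = -grad_R(h) / w                  # targets r_l = -w_l^{-1} dR/dh(x_l)
            g = weak_learner(w, X, r)           # approximate argmin of eq. (3)
            s = line_search(lambda s: R(h + s * g))
            h = h + s * g
        return h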
The main conceptual difference between our view and that of Friedman is that he views regression
as a "reasonable" approximation to the first order gradient ∇R, while our work views it as a natural
consequence of a second order approximation of the objective function (in which the quadratic term
serves as an upper bound of the Hessian, either locally or globally). This leads to an algorithmic difference. In our approach, a good choice of the second order upper bound (leading to a tighter bound)
may require non-uniform weights W. This is in line with earlier boosting work in which sample
reweighting was a central idea. In our framework, the reweighting naturally occurs when we choose
a tight second order approximation. Different reweightings can affect the rate of convergence in our
analysis. The other main difference with Friedman is that he only considered objective functions of
the form (4); we propose a natural extension to the ones of the form (5).
3 Learning Ranking Functions
We now apply Algorithm 1 to the problem of learning ranking functions. We use preference data as
well as labeled data for training the ranking function. For preference data, we use x ≻ y to mean
that x is preferred over y, or x should be ranked higher than y, where x and y are the feature vectors
for the corresponding items to be ranked. We denote the set of available preferences as S = {x_i ≻
y_i, i = 1, ..., N}. In addition to the preference data, there are also labeled data, L = {(z_i, l_i), i =
1, ..., n}, where z_i is the feature vector of an item and l_i is the corresponding numerically coded label.^2
We formulate the ranking problem as computing a ranking function h ∈ H, such that h satisfies as
much as possible the set of preferences, i.e., h(x_i) ≥ h(y_i) if x_i ≻ y_i, i = 1, ..., N, while at the
same time h(z_i) matches the label l_i in a sense to be detailed below.
^2 Some may argue that absolute relevance judgments can also be converted to relative relevance judgments.
For example, for a query, suppose we have three documents d_1, d_2 and d_3 labeled as perfect, good, and bad,
respectively. We can obtain the following relative relevance judgments: d_1 is preferred over d_2, d_1 is preferred
over d_3 and d_2 is preferred over d_3. However, it is often the case in Web search that for many queries there
only exist documents with a single label, and for such queries no preference data can be constructed.
THE OBJECTIVE FUNCTION. We use the following objective function to measure the empirical risk
of a ranking function h,

    R(h) = (w/2) Σ_{i=1}^{N} (max{0, h(y_i) − h(x_i) + τ})^2 + ((1 − w)/2) Σ_{i=1}^{n} (l_i − h(z_i))^2.
The objective function consists of two parts: 1) for the preference data part, we introduce a margin
parameter τ and would like to enforce that h(x_i) ≥ h(y_i) + τ; if not, the difference is quadratically
penalized; and 2) for the labeled data part, we simply minimize the squared errors. The parameter
w is the relative weight for the preference data and can typically be found by cross-validation.
The optimization problem we seek to solve is h* = argmin_{h ∈ H} R(h), where H is some given
function class. Note that R depends only on the values h(x_i), h(y_i), h(z_i), so we can optimize it
using the general boosting framework discussed in section 2.
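A direct Python sketch of this objective, with illustrative names, follows; h is any callable scoring function.

    def qbrank_objective(h, prefs, labeled, tau, w):
        """R(h) for the ranking problem.

        prefs:   list of (x_i, y_i) pairs with x_i preferred over y_i.
        labeled: list of (z_i, l_i) pairs of feature vector and numeric label.
        """
        pref_loss = sum(max(0.0, h(y) - h(x) + tau) ** 2 for x, y in prefs)
        label_loss = sum((l - h(z)) ** 2 for z, l in labeled)
        return 0.5 * w * pref_loss + 0.5 * (1.0 - w) * label_loss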
QUADRATIC APPROXIMATION. To this end, consider the quadratic approximation (6) for R(h).
For simplicity let us assume that each feature vector x_i, y_i and z_i only appears in S and L once;
otherwise we need to compute appropriately formed averages. We consider

    h(x_i), h(y_i), i = 1, ..., N,    h(z_i), i = 1, ..., n

as the unknowns, and compute the gradient of R(h) with respect to those unknowns. The component of the negative gradient corresponding to h(z_i) is just l_i − h(z_i). The components of the
negative gradient corresponding to h(x_i) and h(y_i), respectively, are

    max{0, h(y_i) − h(x_i) + τ},    −max{0, h(y_i) − h(x_i) + τ}.

Both of the above equal zero when h(x_i) − h(y_i) ≥ τ. For the second-order term, it can be readily
verified that the Hessian of R(h) is block-diagonal with 2-by-2 blocks corresponding to h(x_i) and
h(y_i) and 1-by-1 blocks for h(z_i). In particular, if we evaluate the Hessian at h, the 2-by-2 block
equals

    [  1  −1 ]          [ 0  0 ]
    [ −1   1 ]   and    [ 0  0 ]

for x_i ≻ y_i with h(x_i) − h(y_i) < τ and h(x_i) − h(y_i) ≥ τ, respectively. We can upper bound the
first matrix by the diagonal matrix diag(2, 2), leading to a quadratic upper bound. We summarize
the above derivations in the following algorithm.
Algorithm 2 Boosted Ranking using Successive Quadratic Approximation (QBRank)
Start with an initial guess h_0; for m = 1, 2, ...,
1) we construct a training set for fitting g_m(x) by adding the following for each ⟨x_i, y_i⟩ ∈ S,

    (x_i, max{0, h_{m−1}(y_i) − h_{m−1}(x_i) + τ}),    (y_i, −max{0, h_{m−1}(y_i) − h_{m−1}(x_i) + τ}),

and

    {(z_i, l_i − h_{m−1}(z_i)), i = 1, ..., n}.

The fitting of g_m(x) is done by using a base regressor with the above training set; we weight
the above preference data by w and the labeled data by 1 − w, respectively.
2) forming h_m = h_{m−1} + η s_m g_m(x),
where s_m is found by line search to minimize the objective function, and η is a shrinkage factor.
The shrinkage factor η is by default 1, but Friedman [8] reported better results (coming from better
regularization) by taking η < 1. In general, we choose η and w by cross-validation. τ could be the
degree of preference if that information is available, e.g., the absolute grade difference for each
preference if it is converted from labeled data. Otherwise, we simply set it to 1.0. When there is
no preference data and the weak regression learner produces a regression tree, QBrank is identical
to Gradient Boosting Trees (GBT) as proposed in [8].

REMARK. An x_i can appear multiple times in Step 1); in this case we use the average gradient
values as the target value for each distinct x_i.
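The per-iteration training set of Algorithm 2 can be built in a few lines. This Python sketch (illustrative names; it skips the averaging of repeated x_i mentioned in the remark) returns (feature, target, weight) triples to hand to the base regressor.

    def qbrank_targets(h_prev, prefs, labeled, tau, w):
        """Step 1 of Algorithm 2: regression targets for fitting g_m."""
        examples = []   # (feature vector, target, weight)
        for x, y in prefs:
            delta = max(0.0, h_prev(y) - h_prev(x) + tau)
            examples.append((x, delta, w))      # push x up
            examples.append((y, -delta, w))     # push y down
        for z, l in labeled:
            examples.append((z, l - h_prev(z), 1.0 - w))
        return examples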
4 Experiment Results
We carried out several experiments illustrating the properties and effectiveness of QBrank using
combined preference data and labeled data in the context of learning ranking functions for Web
search [3]. We also compared its performance with QBrank using preference data only and several
existing algorithms such as Gradient Boosting Trees [8] and RankSVM [11, 12]. RankSVM is a
preference learning method which learns pair-wise preferences based on an SVM approach.
DATA COLLECTION. We first describe how the data used in the experiments were collected. For
each query-document pair we extracted a set of features to form a feature vector, which consists of
three parts, x = [x_Q, x_D, x_QD], where 1) the query-feature vector x_Q comprises features dependent
on the query q only and have constant values across all the documents d in the document set, for
example, the number of terms in the query, whether or not the query is a person name, etc.; 2)
the document-feature vector xD comprises features dependent on the document d only and have
constant values across all the queries q in the query set, for example, the number of inbound links
pointing to the document, the amount of anchor-texts in bytes for the document, and the language
identity of the document, etc.; and 3) the query-document feature vector x QD which comprises
features dependent on the relation of the query q with respect to the document d, for example, the
number of times each term in the query q appears in the document d, the number of times each term
in the query q appears in the anchor-texts of the document d, etc.
We sampled a set of queries from the query logs of a commercial search engine and generated
a certain number of query-document pairs for each of the queries. A five-level numerical grade
(0, 1, 2, 3, 4) is assigned to each query-document pair based on the degree of relevance. In total
we have 4,898 queries and 105,243 query-document pairs. We split the data into three subsets as
follows: 1) we extract all the queries which have documents with a single label. The set of feature
vectors and the corresponding labels form training set L1 , which contains around 2000 queries
giving rise to 20,000 query-document pairs. (Some single-labeled data are from an editorial database,
where each query has a few ideal results with the same label. Others are bad ranking cases submitted
internally, for which all the documents for a query are labeled as bad. As we will see, this type of single-labeled data is very useful for learning ranking functions); and 2) we then randomly split the
remaining data by queries, and construct a training set L2 containing about 1300 queries and 40,000
query-document pairs and a test set L3 with about 1400 queries and 44,000 query-document pairs.
We use L2 or L3 to generate a set of preference data as follows: given a query q and two documents
dx and dy . Let the feature vectors for (q, dx ) and (q, dy ) be x and y, respectively. If dx has a higher
grade than dy , we include the preference x ? y while if dy has a higher grade than dx , we include
the preference y ? x. For each query, we consider all pairs of documents within the search results
for that query except those with equal grades. This way, we generate around 500,000 preference
pairs in total. We denote the preference data as P2 and P3 corresponding to L2 and L3 , respectively.
EVALUATION METRICS. The output of QBrank is a ranking function h which is used to rank the
documents x according to h(x). Therefore, document x is ranked higher than y by the ranking
function h if h(x) > h(y), and we call this the predicted preference. We propose the following two
metrics to evaluate the performance of a ranking function with respect to a given set of preferences
which we considered as the true preferences.
1) Precision at K%: for two documents x and y (with respect to the same query), it is reasonable to
assume that it is easy to compare x and y if |h(x) − h(y)| is large, and that x and y should have about
the same rank if h(x) is close to h(y). Based on this, we sort all the document pairs ⟨x, y⟩ according
to |h(x) − h(y)|. We call precision at K% the fraction of non-contradicting pairs in the top K% of
the sorted list. Precision at 100% can be considered an overall performance measure of a ranking
function.
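A small Python sketch of this metric, with illustrative names, where each pair (x, y) records that x is truly preferred over y:

    def precision_at_k(h, pairs, k_frac):
        """Fraction of non-contradicted pairs among the top k_frac of pairs
        sorted by |h(x) - h(y)| (largest score gaps first)."""
        ranked = sorted(pairs, key=lambda p: abs(h(p[0]) - h(p[1])), reverse=True)
        top = ranked[:max(1, int(k_frac * len(ranked)))]
        agree = sum(1 for x, y in top if h(x) > h(y))
        return agree / len(top)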
2) Discounted Cumulative Gain (DCG): DCG has been widely used to assess relevance in the context
of search engines [10]. For a ranked list of N documents (N is set to 5 in our experiments), we
use the following variation of DCG,

    DCG_N = Σ_{i=1}^{N} G_i / log_2(i + 1),

where G_i represents the weight assigned to the label of the document at position i. A higher degree of
relevance corresponds to a higher value of the weight.
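For concreteness, a one-line Python version of this DCG variant (with position i starting at 1, so the top document is undiscounted):

    import math

    def dcg_at_n(gains, n=5):
        """DCG_N = sum_{i=1..N} G_i / log2(i + 1); `gains` is ordered by rank."""
        return sum(g / math.log2(i + 1) for i, g in enumerate(gains[:n], start=1))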
PARAMETERS. There are three parameters in QBrank: τ, η, and w. In our experiments, τ is the
absolute grade difference between each pair ⟨x_i, y_i⟩. We set η to 0.05 and w to 0.5 in our
experiments.
Table 1: Precision at K% for QBrank, GBT, and RankSVM

%K      QBrank   GBT      RankSVM
10%     0.9446   0.9328   0.8524
20%     0.903    0.8939   0.8152
30%     0.8611   0.8557   0.7839
40%     0.8246   0.8199   0.7578
50%     0.7938   0.7899   0.7357
60%     0.7673   0.7637   0.7151
70%     0.7435   0.7399   0.6957
80%     0.7218   0.7176   0.6779
90%     0.7015   0.6977   0.6615
100%    0.6834   0.6803   0.6465
For a fair comparison, we used a single regression tree with 20 leaf nodes as the base
regressor of both GBT and QBrank in our experiments. η and the number of leaf nodes were tuned for
GBT through cross validation. We did not retune them for QBrank.
EXPERIMENTS AND RESULTS. We are interested in the following questions: 1) How does GBT
using labeled data L2 compare with QBrank or RankSVM using the preference data extracted from
the same labeled data, P2? and 2) Is it useful to include the single-labeled data L1 in GBT and QBrank?
To this end, we considered the following six experiments for comparison: 1) GBT using L1, 2) GBT
using L2, 3) GBT using L1 ∪ L2, 4) RankSVM using P2, 5) QBrank using P2, and 6) QBrank using
P2 ∪ L1.
Table 1 presents the precision at K% on data P3 for the ranking function learned from GBT with
labeled training data L2 , and QBrank and RankSVM with the corresponding preference data P2 .
This shows that QBrank outperforms both GBT and RankSVM with respect to the precision at K%
metric.
The DCG-5 for RankSVM using P2 is 6.181, while that for the other five methods is shown in
Figure 1, from which we can see it is useful to include single-labeled data in GBT training. In the case
of preference learning, no preference pairs can be extracted from single-labeled data. Therefore,
existing methods such as RankSVM, RankNet and RankBoost that are formulated for preference
data only cannot take advantage of such data. The QBrank framework can combine preference data
and labeled data in a natural way. From Figure 1, we can see QBrank using combined preference
data and labeled data outperforms both QBrank and RankSVM using preference data only, which
indicates that single-labeled data are also useful for QBrank training. Another observation is that
GBT using labeled data is significantly worse than QBrank using preference data extracted from the
same labeled data.^3 The clear convergence trend of QBrank is also demonstrated in Figure 1. Notice
that we excluded all tied data (pairs of documents with the same grades) when converting the absolute
relevance judgments into preference data, which can be a significant loss of information. For example,
given x1 ≻ x2 and x3 ≻ x4, if we also know that x2 ties with x3, then we can obtain the whole ranking
x1 ≻ {x2, x3} ≻ x4. Including tied data could further improve the performance of both GBrank and
QBrank.
5 Conclusions and Future Work
We proposed a general boosting method for optimizing complex loss functions. We also applied
the general framework to the problem of learning ranking functions. Experimental results using a
commercial search engine data show that our approach leads to significant improvements. In future
work, 1) we will add regularization to the preference part in the objective function; 2) we plan
to apply our general boosting method to other structured learning problems; and 3) we will also
explore other applications where both preference and labeled data are available for training ranking
functions.
^3 A 1% DCG gain is considered significant on this data set for commercial search engines.
Figure 1: DCG-5 v. Iterations (number of trees) for GBT using L2, GBT using L1, GBT using L1+L2, QBrank using P2, and QBrank using P2+L1. Notice that the DCG for RankSVM using P2 is 6.181.
Appendix: Convergence results
We introduce a few definitions.
Definition 1 C is scale-invariant if ∀g ∈ C and λ ∈ R, λg ∈ C.

Definition 2 ||g||_{W,X} = sqrt((1/n) Σ_l w_l g(x_l)^2).

Definition 3 Let h ∈ span(C); then ||h||_{W,X} = inf { Σ_j |α_j| : h = Σ_j α_j g_j / ||g_j||_{W,X}; g_j ∈ C }.

Definition 4 Let R(h) be a function of h. A global upper bound M of its Hessian with respect to
[W, X] satisfies: ∀h, α and g: R(h + αg) ≤ R(h) + α ∇R(h)^T g + (α^2/2) M ||g||^2_{W,X}.
Although we only consider global upper bounds, it is easy to see that results with respect to local
upper bounds can also be established.
Theorem 1 Consider Algorithm 1, where R is a convex function of h. Let M be an upper bound of
the Hessian of R. Assume that C is scale-invariant. Let h̄ ∈ span(C). Let s̄_k = s_k ||g_k||_{W,X} be the
normalized step-size, a_j = Σ_{i=0}^{j} s̄_i, and b_j = Σ_{i≥j} (s̄_i sqrt(2 ε_i) + M s̄_i^2 / 2); then

    R(h_{k+1}) ≤ R(h̄) + (||h̄||_{W,X} / (||h̄||_{W,X} + a_k)) max(0, R(0) − R(h̄))
                + inf_j ( ((||h̄||_{W,X} + a_j) / (||h̄||_{W,X} + a_k)) (b_0 − b_{j+1}) + (b_{j+1} − b_{k+1}) ).

If we choose s̄_k ≥ 0 such that Σ_k s̄_k = ∞ and Σ_k (s̄_k^2 + s̄_k sqrt(ε_k)) < ∞, then lim_{k→∞} R(h_k) =
inf_{h ∈ span(C)} R(h), and the rate of convergence compared to any target h̄ ∈ span(C) only depends
on ||h̄||_{W,X} and the sequences {a_j} and {b_j}.
The proof is a systematic application of the idea outlined earlier and will be detailed in a separate publication. In practice, one often sets the step size to be a small constant. In particular, for some fixed s > 0, we can choose sqrt(2 ε_i) ≤ M s^2 / 2 and s_k ||g_k||_{W,X} = s^2 when
R(h_k + s̄_k ḡ_k) ≤ R(h_k) (s̄_k = 0 otherwise). Theorem 1 gives the following bound when
k ≥ ||h̄||_{W,X} max(0, R(0) − R(h̄)) / (M s̄^3):

    R(h_{k+1}) ≤ R(h̄) + 2s sqrt(max(0, R(0) − R(h̄)) ||h̄||_{W,X} M) + M s^4.
The convergence results show that in order to have a risk not much worse than that of any target function
h̄ ∈ span(C), the approximation function h_k does not need to be very complex when the complexity
is measured by its 1-norm. It is also important to see that the quantities appearing in the generalization
analysis do not depend on the number of samples. These results imply that, statistically, Algorithm 1
(with small step-size) has an implicit regularization effect that prevents the procedure from overfitting
the data. Standard empirical process techniques can then be applied to obtain generalization bounds
for Algorithm 1.
References
[1] Balcan, N., Beygelzimer, A., Langford, J., and Sorkin, G. Robust reductions from ranking to
classification, manuscript, 2007.
[2] Bertsekas, D. Nonlinear programming. Athena Scientific, second edition, 1999.
[3] Burges, C., Shaked, T., Renshaw, E., Lazier, A., Deeds, M., Hamilton, N., and Hullender, G.
Learning to rank using gradient descent. Proc. of Intl. Conf. on Machine Learning (ICML) (2005).
[4] Dietterich, T. G., Ashenfelter, A., and Bulatov, Y. Training conditional random fields via gradient
tree boosting. Proc. of Intl. Conf. on Machine Learning (ICML) (2004).
[5] Clémençon, S., Lugosi, G., and Vayatis, N. Ranking and scoring using empirical risk minimization.
Proc. of COLT (2005).
[6] Cohen, W. W., Schapire, R. E., and Singer, Y. Learning to order things. Journal of Artificial
Intelligence Research, Neural Computation, 13, 1443–1472 (1999).
[7] Freund, Y., Iyer, R., Schapire, R. E., and Singer, Y. An efficient boosting algorithm for combining
preferences. Journal of Machine Learning Research 4 (2003), 933–969.
[8] Friedman, J. H. Greedy function approximation: A gradient boosting machine. Annals of Statistics 29,
5 (2001), 1189–1232.
[9] Herbrich, R., Graepel, T., and Obermayer, K. Large margin rank boundaries for ordinal regression.
115–132.
[10] Järvelin, K., and Kekäläinen, J. IR evaluation methods for retrieving highly relevant documents.
Proc. of ACM SIGIR Conference (2000).
[11] Joachims, T. Optimizing search engines using clickthrough data. Proc. of ACM SIGKDD Conference
(2002).
[12] Joachims, T., Granka, L., Pan, B., and Gay, G. Accurately interpreting clickthrough data as
implicit feedback. Proc. of ACM SIGIR Conference (2005).
[13] Tsochantaridis, I., Joachims, T., Hofmann, T., and Altun, Y. Large margin methods for
structured and interdependent output variables. Journal of Machine Learning Research, 6:1453–1484,
2005.
| 3305 |@word kgk:3 illustrating:1 norm:1 relevancy:1 d2:3 seek:1 pick:2 eld:1 arti:1 reduction:1 initial:1 contains:1 tuned:1 document:37 outperforms:2 existing:3 com:1 si:1 dx:4 readily:1 numerical:1 ranka:1 wx:1 cant:4 enables:1 update:1 greedy:2 leaf:2 guess:1 item:2 intelligence:1 xk:1 boosting:18 node:2 preference:39 successive:1 zhang:1 constructed:1 retrieving:1 consists:2 khk:7 combine:1 introduce:2 pairwise:1 multi:1 grade:7 globally:1 chap:1 discounted:1 solver:1 xed:1 kind:1 argmin:1 substantially:1 cial:1 xd:2 tie:1 internally:1 appear:1 local:1 consequence:1 ak:2 black:1 might:1 twice:1 range:1 statistically:1 chapire:2 practice:1 block:4 x3:3 procedure:1 urge:1 empirical:3 ga:1 close:1 ohen:1 risk:6 context:2 optimize:3 demonstrated:1 convex:1 sigir:2 formulate:1 simplicity:1 supz:1 importantly:1 his:1 handle:2 variation:1 tting:3 target:4 commercial:4 suppose:1 gm:1 annals:1 olivier:1 programming:1 us:1 trend:1 nitions:1 labeled:24 database:1 ugosi:1 wj:4 sun:1 weigh:1 hkq:1 complexity:1 comparsion:1 inger:2 depend:1 rewrite:1 tight:1 serve:1 learner:6 derivation:1 distinct:1 describe:2 gbt:17 query:38 h0:4 larger:1 solve:2 widely:1 say:1 otherwise:3 statistic:1 gi:2 singled:1 emark:1 advantage:1 differentiable:1 sequence:1 propose:3 coming:1 relevant:2 combining:2 rst:5 convergence:10 extending:1 r1:1 produce:2 intl:2 perfect:1 inbound:1 illustrate:1 derive:1 measured:1 progress:1 p2:10 predicted:1 signi:4 qd:1 sunnyvale:1 require:1 hx:1 generalization:2 tighter:1 extension:2 around:3 considered:6 ranksvm:12 algorithmic:1 bj:3 pointing:1 adopt:1 proc:6 label:8 minimization:2 rankboost:1 pn:1 hj:2 shrinkage:2 boosted:1 gatech:1 publication:1 improvement:2 rank:6 indicates:2 hk:24 rigorous:2 sigkdd:1 sense:1 dependent:3 typically:2 dcg:9 relation:1 expand:1 interested:2 arg:1 overall:1 colt:1 yahoo:2 plan:1 equal:4 once:1 construct:2 field:1 identical:2 represents:1 x4:2 kw:3 icml:2 jarvelin:1 future:2 np:1 gordon:1 few:2 randomly:1 ve:2 n1:1 friedman:4 atlanta:1 interest:1 satis:1 highly:1 zheng:1 evaluation:1 scienti:1 closer:1 tree:7 taylor:2 theoretical:1 earlier:2 subset:1 uniform:2 prechosen:1 oachims:3 reported:1 combined:2 person:1 cantly:1 systematic:1 regressor:2 w1:1 squared:1 central:1 containing:1 choose:4 worse:2 conf:2 leading:3 li:6 converted:2 de:6 summarized:1 wk:1 inc:2 satisfy:1 ranking:28 depends:3 h1:6 try:1 view:3 zha:2 start:1 sort:1 minimize:3 formed:1 ass:1 ir:1 judgment:4 identify:2 weak:5 accurately:1 cc:1 cation:2 submitted:1 zhaohui:2 ed:1 naturally:1 proof:2 mi:5 sampled:1 gain:2 appears:3 manuscript:1 higher:7 done:1 box:1 just:1 implicit:2 hand:1 web:8 nonlinear:1 reweighting:2 aj:4 name:1 effect:1 normalized:1 true:1 hence:1 assigned:3 regularization:3 excluded:1 ay:1 l1:6 interpreting:1 balcan:1 wise:1 ef:1 functional:4 retune:1 discussed:1 he:4 numerically:1 refer:1 outlined:1 language:1 hxi:2 chapelle:1 l3:3 gj:2 etc:3 base:4 add:1 optimizing:4 inf:3 certain:1 inequality:1 yi:23 der:1 nition:4 scoring:1 minimum:1 speci:1 converting:1 multiple:1 rj:4 match:1 cross:3 coded:1 prediction:2 involving:1 regression:14 metric:2 editorial:1 iteration:3 vayatis:1 addition:1 want:1 appropriately:1 veri:1 limk:1 thing:1 effectiveness:1 call:3 ideal:1 split:2 easy:2 variety:2 xj:3 affect:1 zi:11 idea:5 whether:2 motivated:1 handled:2 six:1 hessian:8 ranknet:1 useful:4 clear:2 listed:1 detailed:2 amount:1 s4:1 locally:1 generate:2 exist:1 notice:2 sign:1 arising:1 data3:1 d3:3 pj:1 fraction:1 named:1 tzhang:1 almost:1 extends:1 reasonable:2 p3:2 
appendix:2 dy:4 bound:18 quadratic:11 encountered:1 x2:3 calling:1 argument:1 min:2 span:6 structured:4 developing:1 according:3 across:2 pan:1 invariant:2 ordinal:2 know:1 end:3 available:3 apply:3 enforce:1 appearing:1 original:1 denotes:1 remaining:1 include:4 top:1 log2:1 newton:2 cally:1 giving:1 ting:1 objective:7 question:1 quantity:1 occurs:1 diagonal:5 gradient:14 link:1 separate:1 athena:1 argue:1 collected:1 valuation:1 minimizing:1 reund:1 esults:1 gk:5 negative:2 rise:1 clickthrough:1 unknown:2 upper:16 observation:2 sm:2 descent:1 bk:1 pair:16 tentative:1 engine:6 quadratically:1 learned:1 established:2 beyond:1 below:1 summarize:1 including:2 max:9 ranked:5 natural:3 representing:1 improve:1 technology:1 imply:1 ne:1 carried:1 hm:7 extract:2 xq:2 text:2 byte:1 interdependent:1 l2:9 relative:3 qbrank:25 loss:14 interesting:1 validation:3 h2:3 degree:4 riedman:1 penalized:1 side:1 institute:1 wide:2 taking:1 absolute:4 tolerance:1 boundary:1 default:1 xn:1 feedback:1 cumulative:1 preferred:5 dealing:1 global:3 hongyuan:1 anchor:2 conceptual:1 xi:27 search:17 sk:6 table:2 learn:1 robust:1 ca:1 expansion:2 hmi:3 kgj:1 complex:7 diag:1 did:1 main:3 bounding:1 whole:1 s2:2 edition:1 contradicting:1 fair:1 x1:4 cient:1 georgia:1 tong:1 precision:6 position:1 comprises:3 lie:1 tied:2 weighting:1 learns:1 rk:5 theorem:2 bad:3 list:2 svm:1 adding:1 yer:1 margin:3 chen:1 easier:1 simply:2 explore:1 forming:1 prevents:1 expressed:2 corresponds:1 extracted:4 acm:3 conditional:2 goal:1 identity:1 sorted:1 formulated:1 replace:1 except:1 classi:2 total:2 experimental:2 e:1 indicating:1 college:1 relevance:8 evaluate:2 d1:1 |
2,543 | 3,306 | Regret Minimization in Games with Incomplete
Information
Martin Zinkevich
maz@cs.ualberta.ca
Michael Johanson
johanson@cs.ualberta.ca
Carmelo Piccione
Computing Science Department
University of Alberta
Edmonton, AB Canada T6G2E8
carm@cs.ualberta.ca
Michael Bowling
Computing Science Department
University of Alberta
Edmonton, AB Canada T6G2E8
bowling@cs.ualberta.ca
Abstract
Extensive games are a powerful model of multiagent decision-making scenarios
with incomplete information. Finding a Nash equilibrium for very large instances
of these games has received a great deal of recent attention. In this paper, we
describe a new technique for solving large games based on regret minimization.
In particular, we introduce the notion of counterfactual regret, which exploits the
degree of incomplete information in an extensive game. We show how minimizing
counterfactual regret minimizes overall regret, and therefore in self-play can be
used to compute a Nash equilibrium. We demonstrate this technique in the domain
of poker, showing we can solve abstractions of limit Texas Hold'em with as many
as 10^12 states, two orders of magnitude larger than previous methods.
1 Introduction
Extensive games are a natural model for sequential decision-making in the presence of other
decision-makers, particularly in situations of imperfect information, where the decision-makers have
differing information about the state of the game. As with other models (e.g., MDPs and POMDPs),
its usefulness depends on the ability of solution techniques to scale well in the size of the model. Solution techniques for very large extensive games have received considerable attention recently, with
poker becoming a common measuring stick for performance. Poker games can be modeled very
naturally as an extensive game, with even small variants, such as two-player, limit Texas Hold'em,
being impractically large with just under 10^18 game states.
State of the art in solving extensive games has traditionally made use of linear programming using a
realization plan representation [1]. The representation is linear in the number of game states, rather
than exponential, but considerable additional technology is still needed to handle games the size of
poker. Abstraction, both hand-chosen [2] and automated [3], is commonly employed to reduce the
game from 10^18 to a tractable number of game states (e.g., 10^7), while still producing strong poker
programs. In addition, dividing the game into multiple subgames each solved independently or in
real-time has also been explored [2, 4]. Solving larger abstractions yields better approximate Nash
equilibria in the original game, making techniques for solving larger games the focus of research
in this area. Recent iterative techniques have been proposed as an alternative to the traditional
linear programming methods. These techniques have been shown capable of finding approximate
solutions to abstractions with as many as 10^10 game states [5, 6, 7], resulting in the first significant
improvement in poker programs in the past four years.
In this paper we describe a new technique for finding approximate solutions to large extensive games.
The technique is based on regret minimization, using a new concept called counterfactual regret. We
show that minimizing counterfactual regret minimizes overall regret, and therefore can be used to
compute a Nash equilibrium. We then present an algorithm for minimizing counterfactual regret
in poker. We use the algorithm to solve poker abstractions with as many as 10^12 game states, two
orders of magnitude larger than previous methods. We also show that this translates directly into
an improvement in the strength of the resulting poker playing programs. We begin with a formal
description of extensive games followed by an overview of regret minimization and its connections
to Nash equilibria.
2 Extensive Games, Nash Equilibria, and Regret
Extensive games provide a general yet compact model of multiagent interaction, which explicitly
represents the often sequential nature of these interactions. Before presenting the formal definition,
we first give some intuitions. The core of an extensive game is a game tree just as in perfect information games (e.g., Chess or Go). Each non-terminal game state has an associated player choosing
actions and every terminal state has associated payoffs for each of the players. The key difference
is the additional constraint of information sets, which are sets of game states that the controlling
player cannot distinguish and so must choose actions for all such states with the same distribution.
In poker, for example, the first player to act does not know which cards the other players were dealt,
and so all game states immediately following the deal where the first player holds the same cards
would be in the same information set. We now describe the formal model as well as notation that
will be useful later.
Definition 1 [8, p. 200] A finite extensive game with imperfect information has the following components:
• A finite set N of players.
• A finite set H of sequences, the possible histories of actions, such that the empty sequence is in H and every prefix of a sequence in H is also in H. Z ⊆ H are the terminal histories (those which are not a prefix of any other sequences). A(h) = {a : (h, a) ∈ H} are the actions available after a nonterminal history h ∈ H.
• A function P that assigns to each nonterminal history (each member of H\Z) a member of N ∪ {c}. P is the player function. P(h) is the player who takes an action after the history h. If P(h) = c then chance determines the action taken after history h.
• A function f_c that associates with every history h for which P(h) = c a probability measure f_c(·|h) on A(h) (f_c(a|h) is the probability that a occurs given h), where each such probability measure is independent of every other such measure.
• For each player i ∈ N a partition I_i of {h ∈ H : P(h) = i} with the property that A(h) = A(h') whenever h and h' are in the same member of the partition. For I ∈ I_i we denote by A(I) the set A(h) and by P(I) the player P(h) for any h ∈ I. I_i is the information partition of player i; a set I ∈ I_i is an information set of player i.
• For each player i ∈ N a utility function u_i from the terminal states Z to the reals R. If N = {1, 2} and u_1 = −u_2, it is a zero-sum extensive game. Define Δ_{u,i} = max_z u_i(z) − min_z u_i(z) to be the range of utilities to player i.
Note that the partitions of information as described can result in some odd and unrealistic situations
where a player is forced to forget her own past decisions. If all players can recall their previous
actions and the corresponding information sets, the game is said to be one of perfect recall. This
work will focus on finite, zero-sum extensive games with perfect recall.
2.1 Strategies
A strategy σ_i of player i in an extensive game is a function that assigns a distribution over A(I) to each I ∈ I_i, and Σ_i is the set of strategies for player i. A strategy profile σ consists of a strategy for each player, σ_1, σ_2, ..., with σ_{-i} referring to all the strategies in σ except σ_i.
Let π^σ(h) be the probability of history h occurring if players choose actions according to σ. We can decompose π^σ(h) = Π_{i ∈ N∪{c}} π_i^σ(h) into each player's contribution to this probability. Hence, π_i^σ(h) is the probability that if player i plays according to σ then for all histories h' that are a proper prefix of h with P(h') = i, player i takes the corresponding action in h. Let π_{-i}^σ(h) be the product of all players' contributions (including chance) except player i. For I ⊆ H, define π^σ(I) = Σ_{h∈I} π^σ(h) as the probability of reaching a particular information set given σ, with π_i^σ(I) and π_{-i}^σ(I) defined similarly.

The overall value to player i of a strategy profile is then the expected payoff of the resulting terminal node, u_i(σ) = Σ_{h∈Z} u_i(h) π^σ(h).
2.2 Nash Equilibrium
The traditional solution concept of a two-player extensive game is that of a Nash equilibrium. A
Nash equilibrium is a strategy profile σ where

u_1(σ) ≥ max_{σ'_1 ∈ Σ_1} u_1(σ'_1, σ_2)   and   u_2(σ) ≥ max_{σ'_2 ∈ Σ_2} u_2(σ_1, σ'_2).   (1)
An approximation of a Nash equilibrium or ε-Nash equilibrium is a strategy profile σ where

u_1(σ) + ε ≥ max_{σ'_1 ∈ Σ_1} u_1(σ'_1, σ_2)   and   u_2(σ) + ε ≥ max_{σ'_2 ∈ Σ_2} u_2(σ_1, σ'_2).   (2)
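As a quick illustration of (2) in the simplest setting, ε for a candidate profile of a zero-sum normal-form game is just the larger of the two players' best-response gains. The following sketch (ours, purely illustrative; all names are hypothetical) computes it with NumPy:

    import numpy as np

    def nash_epsilon(A, x, y):
        # Smallest eps such that (x, y) is an eps-Nash equilibrium of the
        # zero-sum game with payoff matrix A (row player maximizes x^T A y).
        u1 = x @ A @ y              # row player's utility; u2 = -u1
        br1 = np.max(A @ y)         # row player's best-response value
        br2 = np.max(-A.T @ x)      # column player's best-response value
        return max(br1 - u1, br2 + u1)

    # Rock-paper-scissors: the uniform profile is an exact equilibrium.
    A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
    u = np.ones(3) / 3
    print(nash_epsilon(A, u, u))    # 0.0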
2.3 Regret Minimization
Regret is an online learning concept that has triggered a family of powerful learning algorithms. To
define this concept, first consider repeatedly playing an extensive game. Let σ_i^t be the strategy used by player i on round t. The average overall regret of player i at time T is:

R_i^T = (1/T) max_{σ*_i ∈ Σ_i} Σ_{t=1}^{T} ( u_i(σ*_i, σ_{-i}^t) − u_i(σ^t) )   (3)
Moreover, define σ̄_i^t to be the average strategy for player i from time 1 to T. In particular, for each information set I ∈ I_i, for each a ∈ A(I), define:

σ̄_i^t(I)(a) = ( Σ_{t=1}^{T} π_i^{σ^t}(I) σ^t(I)(a) ) / ( Σ_{t=1}^{T} π_i^{σ^t}(I) ).   (4)
There is a well-known connection between regret and the Nash equilibrium solution concept.
Theorem 2 In a zero-sum game at time T, if both players' average overall regret is less than ε, then σ̄^T is a 2ε-Nash equilibrium.
An algorithm for selecting σ_i^t for player i is regret minimizing if player i's average overall regret (regardless of the sequence σ_{-i}^t) goes to zero as t goes to infinity. As a result, regret minimizing algorithms in self-play can be used as a technique for computing an approximate Nash equilibrium. Moreover, an algorithm's bounds on the average overall regret bound the rate of convergence of the approximation.
Traditionally, regret minimization has focused on bandit problems more akin to normal-form games. Although it is conceptually possible to convert any finite extensive game to an equivalent normal-form game, the exponential increase in the size of the representation makes the use of regret algorithms on the resulting game impractical. Recently, Gordon has introduced the Lagrangian Hedging (LH) family of algorithms, which can be used to minimize regret in extensive games by working with the realization plan representation [5]. We also propose a regret minimization procedure that exploits the compactness of the extensive game. However, our technique doesn't require the costly quadratic programming optimization needed with LH, allowing it to scale more easily while achieving even tighter regret bounds.
3 Counterfactual Regret
The fundamental idea of our approach is to decompose overall regret into a set of additive regret
terms, which can be minimized independently. In particular, we introduce a new regret concept
for extensive games called counterfactual regret, which is defined on an individual information set.
We show that overall regret is bounded by the sum of counterfactual regret, and also show how
counterfactual regret can be minimized at each information set independently.
We begin by considering one particular information set I ∈ I_i and player i's choices made in that information set. Define u_i(σ, h) to be the expected utility given that the history h is reached and then all players play using strategy σ. Define counterfactual utility u_i(σ, I) to be the expected utility given that information set I is reached and all players play using strategy σ except that player i plays to reach I. Formally, if π^σ(h, h') is the probability of going from history h to history h', then:

u_i(σ, I) = ( Σ_{h∈I, h'∈Z} π_{-i}^σ(h) π^σ(h, h') u_i(h') ) / π_{-i}^σ(I)   (5)
Finally, for all a ∈ A(I), define σ|_{I→a} to be a strategy profile identical to σ except that player i always chooses action a when in information set I. The immediate counterfactual regret is:

R_{i,imm}^T(I) = (1/T) max_{a ∈ A(I)} Σ_{t=1}^{T} π_{-i}^{σ^t}(I) ( u_i(σ^t|_{I→a}, I) − u_i(σ^t, I) )   (6)
Intuitively, this is the player's regret in its decisions at information set I in terms of counterfactual utility, with an additional weighting term for the counterfactual probability that I would be reached on that round if the player had tried to do so. As we will often be most concerned about regret when it is positive, let R_{i,imm}^{T,+}(I) = max(R_{i,imm}^T(I), 0) be the positive portion of immediate counterfactual regret.
We can now state our first key result.
Theorem 3 R_i^T ≤ Σ_{I ∈ I_i} R_{i,imm}^{T,+}(I)
The proof is in the full version. Since minimizing immediate counterfactual regret minimizes the overall regret, it enables us to find an approximate Nash equilibrium if we can only minimize the immediate counterfactual regret.

The key feature of immediate counterfactual regret is that it can be minimized by controlling only σ_i(I). To this end, we can use Blackwell's algorithm for approachability to minimize this regret independently on each information set. In particular, we maintain for all I ∈ I_i, for all a ∈ A(I):

R_i^T(I, a) = (1/T) Σ_{t=1}^{T} π_{-i}^{σ^t}(I) ( u_i(σ^t|_{I→a}, I) − u_i(σ^t, I) )   (7)
Define R_i^{T,+}(I, a) = max(R_i^T(I, a), 0); then the strategy for time T+1 is:

σ_i^{T+1}(I)(a) = R_i^{T,+}(I, a) / Σ_{a' ∈ A(I)} R_i^{T,+}(I, a')   if Σ_{a' ∈ A(I)} R_i^{T,+}(I, a') > 0,
                = 1 / |A(I)|                                        otherwise.   (8)
In other words, actions are selected in proportion to the amount of positive counterfactual regret
for not playing that action. If no actions have any positive counterfactual regret, then the action is
selected randomly. This leads us to our second key result.
Theorem 4 If player i selects actions according to Equation 8 then R_{i,imm}^T(I) ≤ Δ_{u,i} √|A_i| / √T, and consequently R_i^T ≤ Δ_{u,i} |I_i| √|A_i| / √T, where |A_i| = max_{h : P(h)=i} |A(h)|.
The proof is in the full version. This result establishes that the strategy in Equation 8 can be used in self-play to compute a Nash equilibrium. In addition, the bound on the average overall regret is linear in the number of information sets. These are similar bounds to what's achievable by Gordon's Lagrangian Hedging algorithms. Meanwhile, minimizing counterfactual regret does not require a costly quadratic program projection on each iteration. In the next section we demonstrate our technique in the domain of poker.
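To make Equations 4, 7 and 8 concrete before moving on, here is a minimal regret-matching node in Python. This is an illustrative sketch under our own naming, not the implementation used for the experiments:

    import numpy as np

    class InformationSet:
        def __init__(self, n_actions):
            self.regret_sum = np.zeros(n_actions)    # cumulative regret R_i^t(I, a), Eq. 7
            self.strategy_sum = np.zeros(n_actions)  # reach-weighted strategies for Eq. 4

        def current_strategy(self):
            # Eq. 8: play in proportion to positive cumulative regret
            r_plus = np.maximum(self.regret_sum, 0.0)
            total = r_plus.sum()
            if total > 0:
                return r_plus / total
            return np.full(len(r_plus), 1.0 / len(r_plus))   # uniform otherwise

        def update(self, action_utils, node_util, reach_minus_i, reach_i):
            # accumulate the reach-weighted strategy first, then the regrets
            self.strategy_sum += reach_i * self.current_strategy()
            self.regret_sum += reach_minus_i * (action_utils - node_util)

        def average_strategy(self):
            # Eq. 4: the average strategy, which converges to equilibrium
            total = self.strategy_sum.sum()
            if total > 0:
                return self.strategy_sum / total
            return np.full(len(self.strategy_sum), 1.0 / len(self.strategy_sum))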
4 Application To Poker
We now describe how we use counterfactual regret minimization to compute a near equilibrium
solution in the domain of poker. The poker variant we focus on is heads-up limit Texas Hold'em, as it is used in the AAAI Computer Poker Competition [9]. The game consists of two players (zero-sum), four rounds of cards being dealt, and four rounds of betting, and has just under 10^18
game states [2]. As with all previous work on this domain, we will first abstract the game and
find an equilibrium of the abstracted game. In the terminology of extensive games, we will merge
information sets; in the terminology of poker, we will bucket card sequences. The quality of the
resulting near equilibrium solution depends on the coarseness of the abstraction. In general, the less
abstraction used, the higher the quality of the resulting strategy. Hence, the ability to solve a larger
game means less abstraction is required, translating into a stronger poker playing program.
4.1 Abstraction
The goal of abstraction is to reduce the number of information sets for each player to a tractable
size such that the abstract game can be solved. Early poker abstractions [2, 4] involved limiting
the possible sequences of bets, e.g., only allowing three bets per round, or replacing all first-round
decisions with a fixed policy. More recently, abstractions involving full four round games with the
full four bets per round have proven to be a significant improvement [7, 6]. We also will keep the
full game's betting structure and focus abstraction on the dealt cards.
Our abstraction groups together observed card sequences based on a metric called hand strength
squared. Hand strength is the expected probability of winning¹ given only the cards a player has
seen. This was used a great deal in previous work on abstraction [2, 4]. Hand strength squared
is the expected square of the hand strength after the last card is revealed, given only the cards a
player has seen. Intuitively, hand strength squared is similar to hand strength but gives a bonus to
card sequences whose eventual hand strength has higher variance. Higher variance is preferred as it
means the player eventually will be more certain about their ultimate chances of winning prior to a
showdown. More importantly, we will show in Section 5 that this metric for abstraction results in
stronger poker strategies.
The final abstraction is generated by partitioning card sequences based on the hand strength squared
metric. First, all round-one card sequences (i.e., all private card holdings) are partitioned into ten
equally sized buckets based upon the metric. Then, all round-two card sequences that shared a
round-one bucket are partitioned into ten equally sized buckets based on the metric now applied at
round two. Thus, a partition of card sequences in round two is a pair of numbers: its bucket in
the previous round and its bucket in the current round given its bucket in the previous round. This
is repeated after each round, continuing to partition card sequences that agreed on the previous
rounds? buckets into ten equally sized buckets based on the metric applied in that round. Thus, card
sequences are partitioned into bucket sequences: a bucket from {1, . . . 10} for each round. The
resulting abstract game has approximately 1.65 × 10^12 game states, and 5.73 × 10^7 information sets. In the full game of poker, there are approximately 9.17 × 10^17 game states and 3.19 × 10^14 information sets. So although this represents a significant abstraction on the original game, it is two
orders of magnitude larger than previously solved abstractions.
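The percentile bucketing itself is a one-liner once the metric has been computed. An illustrative sketch (ours, not the authors' code):

    import numpy as np

    def assign_buckets(metric_values, n_buckets=10):
        # Partition card sequences into equally sized buckets, ordered by the
        # hand-strength-squared metric (higher metric -> higher bucket id).
        ranks = np.argsort(np.argsort(metric_values))
        return ranks * n_buckets // len(metric_values)   # ids in 0..n_buckets-1

    # Round two repeats this within each round-one bucket, so a card sequence
    # becomes a bucket sequence such as (b1, b2, b3, b4).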
4.2 Minimizing Counterfactual Regret
Now that we have specified an abstraction, we can use counterfactual regret minimization to compute an approximate equilibrium for this game. The basic procedure involves having two players
repeatedly play the game using the counterfactual regret minimizing strategy from Equation 8. After T repetitions of the game, or simply iterations, we return (σ̄_1^T, σ̄_2^T) as the resulting approximate equilibrium. Repeated play requires storing R_i^t(I, a) for every information set I and action a, and updating it after each iteration.²
¹ Where a tie is considered 'half a win'.
² The bound from Theorem 4 for the basic procedure can actually be made significantly tighter in the specific case of poker. In the full version, we show that the bound for poker is actually independent of the size of the card abstraction.
For our experiments, we actually use a variation of this basic procedure, which exploits the fact
that our abstraction has a small number of information sets relative to the number of game states.
Although each information set is crucial, many consist of a hundred or more individual histories.
This fact suggests it may be possible to get a good idea of the correct behavior for an information
set by only sampling a fraction of the associated game states. In particular, for each iteration, we
sample deterministic actions for the chance player. Thus, σ_c^t is set to be a deterministic strategy, but chosen according to the distribution specified by f_c. For our abstraction this amounts to choosing a joint bucket sequence for the two players. Once the joint bucket sequence is specified, there are only 18,496 reachable states and 6,378 reachable information sets. Since π_{-i}^{σ^t}(I) is zero for all other information sets, no updates need to be made for these information sets.³
This sampling variant allows approximately 750 iterations of the algorithm to be completed in a
single second on a single core of a 2.4GHz Dual Core AMD Opteron 280 processor. In addition, a
straightforward parallelization is possible and was used when noted in the experiments. Since betting
is public information, the flop-onward information sets for a particular preflop betting sequence can
be computed independently. With four processors we were able to complete approximately 1700
iterations in one second. The complete algorithmic details with pseudocode can be found in the full
version.
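Although the full pseudocode is deferred, the same sampled procedure fits in a page for Kuhn poker, a three-card toy game. The sketch below (ours, reusing the InformationSet class sketched in Section 3) shows the shape of the computation rather than the tournament implementation:

    import random
    import numpy as np

    info_sets = {}   # maps "private card + betting history" to an InformationSet

    def get_info_set(key):
        if key not in info_sets:
            info_sets[key] = InformationSet(2)   # actions: 0 = pass, 1 = bet
        return info_sets[key]

    def terminal_utility(cards, history, player):
        opp = 1 - player
        if history == 'pp':                      # both passed: showdown for antes
            return 1 if cards[player] > cards[opp] else -1
        if history.endswith('bb'):               # bet and call: bigger showdown
            return 2 if cards[player] > cards[opp] else -2
        winner = len(history) % 2                # a bet was folded to
        return 1 if player == winner else -1

    def cfr(cards, history, player, reach_i, reach_minus_i):
        if history in ('pp', 'bp', 'bb', 'pbp', 'pbb'):
            return terminal_utility(cards, history, player)
        acting = len(history) % 2
        node = get_info_set(str(cards[acting]) + history)
        strategy = node.current_strategy()
        action_utils = np.zeros(2)
        for a, move in enumerate('pb'):
            if acting == player:
                action_utils[a] = cfr(cards, history + move, player,
                                      reach_i * strategy[a], reach_minus_i)
            else:
                action_utils[a] = cfr(cards, history + move, player,
                                      reach_i, reach_minus_i * strategy[a])
        node_util = float(strategy @ action_utils)
        if acting == player:
            node.update(action_utils, node_util, reach_minus_i, reach_i)
        return node_util

    for t in range(100000):
        deal = random.sample([0, 1, 2], 2)       # chance's actions, sampled once
        for player in (0, 1):
            cfr(deal, '', player, 1.0, 1.0)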
5 Experimental Results
Before discussing the results, it is useful to consider how one evaluates the strength of a near equilibrium poker strategy. One natural method is to measure the strategy's exploitability, or its performance against its worst-case opponent. In a symmetric, zero-sum game like heads-up poker⁴, a perfect equilibrium has zero exploitability, while an ε-Nash equilibrium has exploitability ε. A convenient measure of exploitability is millibets-per-hand (mb/h), where a millibet is one thousandth of a small-bet, the fixed magnitude of bets used in the first two rounds of betting. To provide some intuition for these numbers, a player that always folds will lose 750 mb/h, while a player that is 10 mb/h stronger than another would require over one million hands to be 95% certain to have won overall.
In general, it is intractable to compute a strategy's exploitability within the full game. For strategies
in a reasonably sized abstraction it is possible to compute their exploitability within their own abstract game. Such a measure is a useful evaluation of the equilibrium computation technique that
was used to generate the strategy. However, it does not imply the technique cannot be exploited by
a strategy outside of its abstraction. It is therefore common to compare the performance of the strategy in the full game against a battery of known strong poker playing programs. Although positive
expected value against an opponent is not transitive, winning against a large and diverse range of
opponents suggests a strong program.
We used the sampled counterfactual regret minimization procedure to find an approximate equilibrium for our abstract game as described in the previous section. The algorithm was run for 2 billion iterations (T = 2 × 10^9), or less than 14 days of computation when parallelized across four CPUs. The resulting strategy's exploitability within its own abstract game is 2.2 mb/h. After only 200 million iterations, or less than 2 days of computation, the strategy was already exploitable by less than
13 mb/h. Notice that the algorithm visits only 18,496 game states per iteration. After 200 million
iterations each game state has been visited less than 2.5 times on average, yet the algorithm has
already computed a relatively accurate solution.
5.1 Scaling the Abstraction
In addition to finding an approximate equilibrium for our large abstraction, we also found approximate equilibria for a number of smaller abstractions. These abstractions used fewer buckets per
round to partition the card sequences. In addition to ten buckets, we also solved eight, six, and five
³ A regret analysis of this variant in poker is included in the full version. We show that the quadratic decrease in the cost per iteration only causes a linear increase in the required number of iterations. The experimental results in the next section coincide with this analysis.
⁴ A single hand of poker is not a symmetric game, as the order of betting is strategically significant. However, a pair of hands where the betting order is reversed is symmetric.
Abs | Size (×10^9) | Iterations (×10^6) | Time (h) | Exp (mb/h)
 5  |     6.45     |        100         |    33    |    3.4
 6  |     27.7     |        200         |    75    |    3.1
 8  |      276     |        750         |   261    |    2.7
10  |     1646     |       2000         |   326*   |    2.2
*: parallel implementation with 4 CPUs
[Figure 1(b) plots exploitability (mb/h) against iterations (in thousands, divided by the number of information sets) for CFR5, CFR8 and CFR10.]
Figure 1: (a) Number of game states, number of iterations, computation time, and exploitability (in
its own abstract game) of the resulting strategy for different sized abstractions. (b) Convergence
rates for three different sized abstractions. The x-axis shows the number of iterations divided by the
number of information sets in the abstraction.
bucket variants. As these abstractions are smaller, they require fewer iterations to compute a similarly accurate equilibrium. For example, the program computed with the five bucket approximation
(CFR5) is about 250 times smaller with just under 10^10 game states. After 100 million iterations,
or 33 hours of computation without any parallelization, the final strategy is exploitable by 3.4 mb/h.
This is approximately the same size of game solved by recent state-of-the-art algorithms [6, 7] with
many days of computation.
Figure 1b shows a graph of the convergence rates for the five, eight, and ten partition abstractions.
The y-axis is exploitability while the x-axis is the number of iterations normalized by the number
of information sets in the particular abstraction being plotted. The rates of convergence almost
exactly coincide showing that, in practice, the number of iterations needed is growing linearly with
the number of information sets. Due to the use of sampled bucket sequences, the time per iteration
is nearly independent of the size of the abstraction. This suggests that, in practice, the overall
computational complexity is only linear in the size of the chosen card abstraction.
5.2 Performance in Full Texas Hold'em
We have noted that the ability to solve larger games means less abstraction is necessary, resulting
in an overall stronger poker playing program. We have played our four near equilibrium bots with
various abstraction sizes against each other and two other known strong programs: PsOpti4 and
S2298. PsOpti4 is a variant of the equilibrium strategy described in [2]. It was the stronger half
of Hyperborean, the AAAI 2006 Computer Poker Competition?s winning program. It is available
under the name SparBot in the entertainment program Poker Academy, published by BioTools. We
have calculated strategies that exploit it at 175 mb/h. S2298 is the equilibrium strategy described in
[6]. We have calculated strategies that exploit it at 52.5 mb/h. In terms of the size of the abstract
game PsOpti4 is the smallest consisting of a small number of merged three round games. S2298
restricts the number of bets per round to 3 and uses a five bucket per round card abstraction based
on hand-strength, resulting an abstraction slightly smaller than CFR5.
Table 1 shows a cross table with the results of these matches. Strategies from larger abstractions
consistently, and significantly, outperform their smaller counterparts. The larger abstractions also
consistently exploit weaker bots by a larger margin (e.g., CFR10 wins 19 mb/h more from S2298
than CFR5).
Finally, we also played CFR8 against the four bots that competed in the bankroll portion of the 2006
AAAI Computer Poker Competition, which are available on the competition's benchmark server [9].
The results are shown in Table 2, along with S2298's previously published performance against the
         | PsOpti4 | S2298 | CFR5 | CFR6 | CFR8 | CFR10 | Max
PsOpti4  |    0    |   28  |  36  |  40  |  52  |  55   |  55
S2298    |  -28    |    0  |  17  |  24  |  30  |  36   |  36
CFR5     |  -36    |  -17  |   0  |   5  |  13  |  20   |  20
CFR6     |  -40    |  -24  |  -5  |   0  |   9  |  14   |  14
CFR8     |  -52    |  -30  | -13  |  -9  |   0  |   6   |   6
CFR10    |  -55    |  -36  | -20  | -14  |  -6  |   0   |   0
Average  |  -35    |  -13  |   2  |   7  |  16  |  22   |

Table 1: Winnings in mb/h for the row player in full Texas Hold'em. Matches with Opti4 used 10 duplicate matches of 10,000 hands each and are significant to 20 mb/h. Other matches used 10 duplicate matches of 500,000 hands each and are significant to 2 mb/h.
        | Hyperborean | BluffBot | Monash | Teddy | Average
S2298   |      61     |    113   |   695  |  474  |   336
CFR8    |     106     |    170   |   746  |  517  |   385

Table 2: Winnings in mb/h for the row player in full Texas Hold'em.
same bots [6]. The program not only beats all of the bots from the competition but does so by a
larger margin than S2298.
6 Conclusion
We introduced a new regret concept for extensive games called counterfactual regret. We showed
that minimizing counterfactual regret minimizes overall regret and presented a general and poker-specific algorithm for efficiently minimizing counterfactual regret. We demonstrated the technique
in the domain of poker, showing that the technique can compute an approximate equilibrium for
abstractions with as many as 10^12 states, two orders of magnitude larger than previous methods. We
also showed that the resulting poker playing program outperforms other strong programs, including
all of the competitors from the bankroll portion of the 2006 AAAI Computer Poker Competition.
References
[1] D. Koller and N. Megiddo. The complexity of two-person zero-sum games in extensive form. Games and
Economic Behavior, pages 528?552, 1992.
[2] D. Billings, N. Burch, A. Davidson, R. Holte, J. Schaeffer, T. Schauenberg, and D. Szafron. Approximating game-theoretic optimal strategies for full-scale poker. In International Joint Conference on Artificial
Intelligence, pages 661–668, 2003.
[3] A. Gilpin and T. Sandholm. Finding equilibria in large sequential games of imperfect information. In ACM
Conference on Electronic Commerce, 2006.
[4] A. Gilpin and T. Sandholm. A competitive texas hold?em poker player via automated abstraction and
real-time equilibrium computation. In National Conference on Artificial Intelligence, 2006.
[5] G. Gordon. No-regret algorithms for online convex programs. In Neural Information Processing Systems
19, 2007.
[6] M. Zinkevich, M. Bowling, and N. Burch. A new algorithm for generating strong strategies in massive
zero-sum games. In Proceedings of the Twenty-Seventh Conference on Artificial Intelligence (AAAI), 2007.
To Appear.
[7] A. Gilpin, S. Hoda, J. Pena, and T. Sandholm. Gradient-based algorithms for finding nash equilibria in
extensive form games. In Proceedings of the Eighteenth International Conference on Game Theory, 2007.
[8] M. Osborne and A. Rubenstein. A Course in Game Theory. The MIT Press, Cambridge, Massachusetts,
1994.
[9] M. Zinkevich and M. Littman. The AAAI computer poker competition. Journal of the International
Computer Games Association, 29, 2006. News item.
2,544 | 3,307 | Colored Maximum Variance Unfolding
Le Song¹, Alex Smola¹, Karsten Borgwardt² and Arthur Gretton³
¹ National ICT Australia, Canberra, Australia
² University of Cambridge, Cambridge, United Kingdom
³ MPI for Biological Cybernetics, Tübingen, Germany
{le.song,alex.smola}@nicta.com.au
kmb51@eng.cam.ac.uk,arthur.gretton@tuebingen.mpg.de
Abstract
Maximum variance unfolding (MVU) is an effective heuristic for dimensionality
reduction. It produces a low-dimensional representation of the data by maximizing the variance of their embeddings while preserving the local distances of the
original data. We show that MVU also optimizes a statistical dependence measure
which aims to retain the identity of individual observations under the distance-preserving constraints. This general view allows us to design 'colored' variants
of MVU, which produce low-dimensional representations for a given task, e.g.
subject to class labels or other side information.
1 Introduction
In recent years maximum variance unfolding (MVU), introduced by Saul et al. [1], has gained popularity as a method for dimensionality reduction. This method is based on a simple heuristic: maximizing the overall variance of the embedding while preserving the local distances between neighboring observations. Sun et al. [2] show that there is a dual connection between MVU and the goal
of finding a fast mixing Markov chain. This connection is intriguing. However, it offers limited
insight as to why MVU can be used for data representation.
This paper provides a statistical interpretation of MVU. We show that the algorithm attempts to
extract features from the data which simultaneously preserve the identity of individual observations
and their local distance structure. Our reasoning relies on a dependence measure between sets of
observations, the Hilbert-Schmidt Independence Criterion (HSIC) [3].
Relaxing the requirement of retaining maximal information about individual observations, we are
able to obtain 'colored' MVU. Unlike traditional MVU which takes only one source of information into account, 'colored' MVU allows us to integrate two sources of information into a single
embedding. That is, we are able to find an embedding that leverages between two goals:
• preserve the local distance structure according to the first source of information (the data);
• and maximally align with the second source of information (side information).
Note that not all features inherent in the data are interesting for an ulterior objective. For instance,
if we want to retain a reduced representation of the data for later classification, then only those
discriminative features will be relevant. 'Colored' MVU achieves the goal of elucidating primarily
relevant features by aligning the embedding to the objective provided in the side information. Some
examples illustrate this situation in more detail:
• Given a bag-of-pixels representation of images (the data), such as USPS digits, find an
embedding which reflects the categories of the images (side information).
• Given a vector space representation of texts on the web (the data), such as newsgroups, find
an embedding which reflects a hierarchy of the topics (side information).
• Given a TF/IDF representation of documents (the data), such as NIPS papers, find an embedding which reflects co-authorship relations between the documents (side information).
There is a strong motivation for not simply merging the two sources of information into a single
distance metric: Firstly, the data and the side information may be heterogenous. It is unclear how
to combine them into a single distance metric; Secondly, the side information may appear in the
form of similarity rather than distance. For instance, co-authorship relations is a similarity between
documents (if two papers share more authors, they tends to be more similar), but it does not induce
a distance between the documents (if two papers share no authors, we cannot assert they are far
apart). Thirdly, at test time (i.e. when inserting a new observation into an existing embedding) only
one source of information might be available, i.e. the side information is missing.
2 Maximum Variance Unfolding
We begin by giving a brief overview of MVU and its projection variants, as proposed in [1]. Given
a set of m observations Z = {z_1, ..., z_m} ⊂ Z and a distance metric d : Z × Z → [0, ∞), find an inner product matrix (kernel matrix) K ∈ R^{m×m} with K ⪰ 0 such that
1. The distances are preserved, i.e. K_ii + K_jj − 2K_ij = d²_ij for all (i, j) pairs which are sufficiently close to each other, such as the n nearest neighbors of each observation. We denote this set by N. We will also use N to denote the graph formed by having these (i, j) pairs as edges.
2. The embedded data is centered, i.e. K1 = 0 (where 1 = (1, ..., 1)^T and 0 = (0, ..., 0)^T).
3. The trace of K is maximized (the maximum variance unfolding part).
Several variants of this algorithm, including a large scale variant [4] have been proposed. By and
large the optimization problem looks as follows:
maximize_{K ⪰ 0} tr K   subject to K1 = 0 and K_ii + K_jj − 2K_ij = d²_ij for all (i, j) ∈ N.   (1)
Numerous variants of (1) exist, e.g. where the distances are only allow to shrink, where slack variables are added to the objective function to allow approximate distance preserving, or where one uses
low-rank expansions of K to cope with the computational complexity of semidefinite programming.
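For small m, (1) can be handed to an off-the-shelf SDP solver almost verbatim. A minimal sketch with CVXPY (ours, illustrative; note that for K ⪰ 0 the scalar constraint 1ᵀK1 = 0 already forces K1 = 0):

    import cvxpy as cp
    import numpy as np

    def mvu(D2, neighbors):
        # D2[i, j]: squared input distance; neighbors: the pair set N
        m = D2.shape[0]
        K = cp.Variable((m, m), PSD=True)
        cons = [cp.sum(K) == 0]   # centering
        cons += [K[i, i] + K[j, j] - 2 * K[i, j] == D2[i, j]
                 for i, j in neighbors]
        cp.Problem(cp.Maximize(cp.trace(K)), cons).solve()
        w, V = np.linalg.eigh(K.value)        # eigenvalues in ascending order
        return V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))   # 2-D embedding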
A major drawback with MVU is that its results necessarily come as somewhat of a surprise. That is,
it is never clear before invoking MVU what specific interesting results it might produce. While in
hindsight it is easy to find an insightful interpretation of the outcome, it is not a-priori clear which
aspect of the data the representation might emphasize. A second drawback is that while in general
generating brilliant results, its statistical origins are somewhat more obscure. We aim to address
these problems by means of the Hilbert-Schmidt Independence Criterion.
3 Hilbert-Schmidt Independence Criterion
Let sets of observations X and Y be drawn jointly from some distribution Pr_xy. The Hilbert-Schmidt Independence Criterion (HSIC) [3] measures the dependence between two random variables, x and y, by computing the square of the norm of the cross-covariance operator over the domain X × Y in Hilbert space. It can be shown, provided the Hilbert space is universal, that this norm vanishes if and only if x and y are independent. A large value suggests strong dependence with respect to the choice of kernels.

Let F and G be the reproducing kernel Hilbert spaces (RKHS) on X and Y with associated kernels k : X × X → R and l : Y × Y → R respectively. The cross-covariance operator C_xy : G → F is defined as [5]

C_xy = E_xy[(k(x, ·) − μ_x)(l(y, ·) − μ_y)],   (2)

where μ_x = E[k(x, ·)] and μ_y = E[l(y, ·)]. HSIC is then defined as the square of the Hilbert-Schmidt norm of C_xy, that is HSIC(F, G, Pr_xy) := ||C_xy||²_HS. In terms of kernels HSIC is

E_{x,x',y,y'}[k(x, x') l(y, y')] + E_{x,x'}[k(x, x')] E_{y,y'}[l(y, y')] − 2 E_{x,y}[ E_{x'}[k(x, x')] E_{y'}[l(y, y')] ].   (3)
Given the samples (X, Y) = {(x_1, y_1), ..., (x_m, y_m)} of size m drawn from the joint distribution Pr_xy, an empirical estimate of HSIC is [3]

HSIC(F, G, Z) = (m − 1)^{−2} tr HKHL,   (4)

where K, L ∈ R^{m×m} are the kernel matrices for the data and the labels respectively, and H_ij = δ_ij − m^{−1} centers the data and the labels in the feature space. (For convenience, we will drop the normalization and use tr HKHL as HSIC.)
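Estimator (4) takes only a few lines; the helpers below are our sketch (the label kernel is one illustrative choice):

    import numpy as np

    def hsic(K, L):
        # tr(HKHL) as in Eq. (4), dropping the (m-1)^{-2} normalization
        m = K.shape[0]
        H = np.eye(m) - np.ones((m, m)) / m   # H_ij = delta_ij - 1/m
        return np.trace(H @ K @ H @ L)

    def delta_kernel(y):
        # l(y, y') = 1 if two labels agree and 0 otherwise
        y = np.asarray(y)
        return (y[:, None] == y[None, :]).astype(float)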
HSIC has been used to measure independence between random variables [3], to select features or to
cluster data (see the Appendix for further details). Here we use it in a different way:
We try to construct a kernel matrix K for the dimension-reduced data X which
preserves the local distance structure of the original data Z, such that X is maximally dependent on the side information Y as seen from its kernel matrix L.
HSIC has several advantages as a dependence criterion. First, it satisfies concentration of measure
conditions [3]: for random draws of observations from Pr_xy, HSIC provides values which are very
similar. This is desirable, as we want our metric embedding to be robust to small changes. Second,
HSIC is easy to compute, since only the kernel matrices are required and no density estimation is
needed. The freedom of choosing a kernel for L allows us to incorporate prior knowledge into the
dependence estimation process. The consequence is that we are able to incorporate various side
information by simply choosing an appropriate kernel for Y .
4 Colored Maximum Variance Unfolding
We state the algorithmic modification first and subsequently we explain why this is reasonable: the
key idea is to replace tr K in (1) by tr KL, where L is the covariance matrix of the domain (side
information) with respect to which we would like to extract features. For instance, in the case of
NIPS papers which happen to have author information, L would be the kernel matrix arising from
coauthorship and d(z, z') would be the Euclidean distance between the vector space representations
of the documents. Key to our reasoning is the following lemma:
Lemma 1 Denote by L a positive semidefinite matrix in R^{m×m} and let H ∈ R^{m×m} be defined as H_ij = δ_ij − m^{−1}. Then the following two optimization problems are equivalent:

maximize_K tr HKHL subject to K ⪰ 0 and constraints on K_ii + K_jj − 2K_ij.   (5a)
maximize_K tr KL subject to K ⪰ 0 and constraints on K_ii + K_jj − 2K_ij and K1 = 0.   (5b)

Any solution of (5b) solves (5a) and any solution of (5a) solves (5b) after centering K ← HKH.
Proof Denote by K_a and K_b the solutions of (5a) and (5b) respectively. K_b is feasible for (5a) and tr K_b L = tr H K_b H L. This implies that tr H K_a H L ≥ tr H K_b H L. Vice versa, H K_a H is feasible for (5b). Moreover tr H K_a H L ≤ tr K_b L by the optimality of K_b. Combining both inequalities shows that tr H K_a H L = tr K_b L, hence both solutions are equivalent.
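A quick numeric sanity check of the lemma (our snippet): centering leaves both the objective and the pairwise distances unchanged while enforcing K1 = 0.

    import numpy as np

    rng = np.random.default_rng(0)
    m = 8
    K = (lambda A: A @ A.T)(rng.standard_normal((m, m)))   # arbitrary PSD K
    L = (lambda B: B @ B.T)(rng.standard_normal((m, m)))   # arbitrary PSD L
    H = np.eye(m) - np.ones((m, m)) / m
    Kc = H @ K @ H                                          # centered K
    dist = lambda M: np.diag(M)[:, None] + np.diag(M)[None, :] - 2 * M
    print(np.allclose(Kc @ np.ones(m), 0))                           # True
    print(np.allclose(np.trace(H @ K @ H @ L), np.trace(Kc @ L)))    # True
    print(np.allclose(dist(K), dist(Kc)))                            # True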
This means that the centering imposed in MVU via constraints is equivalent to the centering in HSIC by means of the dependence measure tr HKHL itself. In other words, MVU equivalently maximizes tr HKHI, i.e. the dependence between K and the identity matrix I, which corresponds to retaining maximal diversity between observations via L_ij = δ_ij. This suggests the following colored version of MVU:
maximize_K tr HKHL subject to K ⪰ 0 and K_ii + K_jj − 2K_ij = d²_ij for all (i, j) ∈ N.   (6)
Using (6) we see that we are now extracting a Euclidean embedding which maximally depends on
the coloring matrix L (for the side information) while preserving local distance structure. A second
advantage of (6) is that whenever we restrict K further, e.g. by only allowing K to be part of a
linear subspace formed by the principal vectors in some space, (6) remains feasible, whereas the
(constrained) MVU formulation may become infeasible (i.e. K1 = 0 may not be satisfied).
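In code, the change relative to the MVU sketch after (1) is a single line of the objective; with a delta kernel on class labels, for instance (ours, reusing the earlier helpers):

    # L built from the side information, e.g. class labels y
    L = delta_kernel(y)
    H = np.eye(m) - np.ones((m, m)) / m
    objective = cp.Maximize(cp.trace(H @ K @ H @ L))   # replaces cp.trace(K)
    # by Lemma 1, the explicit centering constraint may now be dropped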
5 Dual Problem
To gain further insight into the structure of the solution of (6) we derive its dual problem. Our
approach uses the results from [2]. First we define matrices E^{ij} ∈ R^{m×m} for each edge (i, j) ∈ N, such that E^{ij} has only four nonzero entries E^{ij}_ii = E^{ij}_jj = 1 and E^{ij}_ij = E^{ij}_ji = −1. Then the distance preserving constraint can be written as tr K E^{ij} = d²_ij. Thus we have the following Lagrangian:

L = tr KHLH + tr KZ − Σ_{(i,j)∈N} w_ij (tr K E^{ij} − d²_ij)
  = tr K (HLH + Z − Σ_{(i,j)∈N} w_ij E^{ij}) + Σ_{(i,j)∈N} w_ij d²_ij,  where Z ⪰ 0 and w_ij ≥ 0.   (7)

Setting the derivative of L with respect to K to zero yields HLH + Z − Σ_{(i,j)∈N} w_ij E^{ij} = 0. Plugging this condition into (7) gives us the dual problem:

minimize_{w_ij} Σ_{(i,j)∈N} w_ij d²_ij subject to G(w) ⪰ HLH, where G(w) = Σ_{(i,j)∈N} w_ij E^{ij}.   (8)
Note that G(w) amounts to the Graph Laplacian of a weighted graph with adjacency matrix given
by w. The dual constraint G(w) ⪰ HLH effectively requires that the eigen-spectrum of the graph
Laplacian is bounded from below by that of HLH.
We are interested in the properties of the solution K of the primal problem, in particular the number of nonzero eigenvalues. Recall that at optimality the Karush-Kuhn-Tucker conditions imply
tr KZ = 0, i.e. the row space of K lies in the null space of Z. Thus the rank of K is upper bounded
by the dimension of the null space of Z.
Recall that Z = G(w) − HLH ⪰ 0, and by design G(w) ⪰ 0 since it is the graph Laplacian of a weighted graph with edge weights w_ij. If G(w) corresponds to a connected graph, only one
eigenvalue of G(w) vanishes. Hence, the eigenvectors of Z with zero eigenvalues would correspond
to those lying in the image of HLH. If L arises from a label kernel matrix, e.g. for an n-class
classification problem, then we will only have up to n vanishing eigenvalues in Z. This translates
into only up to n nonvanishing eigenvalues in K.
Contrast this observation with plain MVU. In this case L = I, that is, only one eigenvalue of
HLH vanishes. Hence it is likely that G(w) − HLH will have many vanishing eigenvalues, which
translates into many nonzero eigenvalues of K. This is corroborated by experiments (Section 7).
6
Implementation Details
In practice, instead of requiring the distances to remain unchanged in the embedding we only require
them to be preserved approximately [4]. We do so by penalizing the slackness between the original
distance and the embedding distance, i.e.
maximize_K tr HKHL − ν Σ_{(i,j)∈N} ( K_ii + K_jj − 2K_ij − d²_ij )²   subject to K ⪰ 0   (9)
Here ν controls the tradeoff between dependence maximization and distance preservation. The semidefinite program usually has a time complexity up to O(m^6). This renders direct implementation of the above problem infeasible for anything but toy problems. To reduce the computation, we approximate K using an orthonormal set of vectors V (of size m × n) and a smaller positive definite matrix A (of size n × n), i.e. K = VAV^T. Conveniently we choose the number of dimensions n to be much smaller than m (n ≪ m) such that the resulting semidefinite program with respect to A becomes tractable (clearly this is an approximation).
To obtain the matrix V we employ a regularization scheme as proposed in [4]. First, we construct a
nearest neighbor graph according to N (we will also refer to this graph and its adjacency matrix as
N ). Then we form V by stacking together the bottom n eigenvectors of the graph Laplacian of the
neighborhood graph via N . The key idea is that neighbors in the original space remain neighbors in
the embedding space. As we require them to have similar locations, the bottom eigenvectors of the
graph Laplacian provide a set of good bases for functions smoothly varying across the graph.
Subsequent to the semidefinite program we perform local refinement of the embedding via gradient
descent. Here the objective is reformulated using an m × n matrix X, i.e. K = XX^T.
The initial value X0 is obtained using the n leading eigenvectors of the solution of (9).
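A sketch of the construction of V (ours; `adjacency` is the symmetrized neighbor graph N as a dense 0/1 matrix):

    import numpy as np
    from scipy.sparse.csgraph import laplacian

    def smooth_basis(adjacency, n=10):
        # bottom n eigenvectors of the graph Laplacian: the smoothest
        # functions on the neighborhood graph, used as the basis V
        Lap = laplacian(adjacency, normed=False)
        w, U = np.linalg.eigh(Lap)     # eigenvalues in ascending order
        return U[:, :n]                # V; then parameterize K = V A V^T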
7 Experiments
Ultimately the justification for an algorithm is practical applicability. We demonstrate this based on
three datasets: embedding of digits of the USPS database, the Newsgroups 20 dataset containing
Usenet articles in text form, and a collection of NIPS papers from 1987 to 1999.¹ We compare
'colored' MVU (also called MUHSIC, maximum unfolding via HSIC) to MVU [1] and PCA, highlighting places where MUHSIC produces more meaningful results by incorporating side information. Further details, such as effects of the adjacency matrices and a comparison to Neighborhood
Component Analysis [6] are relegated to the appendix due to limitations of space.
For images we use the Euclidean distance between pixel values as the base metric. For text documents, we perform four standard preprocessing steps: (i) the words are stemmed using the Porter
stemmer; (ii) we filter out common but meaningless stopwords; (iii) we delete words that appear in
less than 3 documents; (iv) we represent each document as a vector using the usual TF/IDF (term
frequency / inverse document frequency) weighting scheme. As before, the Euclidean distance on
those vectors is used to find the nearest neighbors.
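With current tooling the four steps look roughly as follows (our sketch; `docs` is assumed to be a list of raw document strings, and the stemmer comes from NLTK):

    from nltk.stem import PorterStemmer
    from sklearn.feature_extraction.text import (TfidfVectorizer,
                                                 ENGLISH_STOP_WORDS)

    stem = PorterStemmer().stem

    def tokenize(doc):
        # (i) Porter stemming and (ii) stopword filtering
        return [stem(w) for w in doc.split() if w not in ENGLISH_STOP_WORDS]

    # (iii) min_df=3 deletes words appearing in fewer than 3 documents,
    # (iv) TF/IDF weighting
    vec = TfidfVectorizer(tokenizer=tokenize, min_df=3)
    X = vec.fit_transform(docs)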
As in [4] we construct the nearest neighbor graph by considering the 1% nearest neighbors of each
point. Subsequently the adjacency matrix of this graph is symmetrized. The regularization parameter
ν as given in (9) is set to 1 as a default. Moreover, as in [4] we choose 10 dimensions (n = 10)
to decompose the embedding matrix K. Final visualization is carried out using 2 dimensions. This
makes our results very comparable to previous work.
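The graph construction itself (our sketch):

    import numpy as np
    from sklearn.neighbors import kneighbors_graph

    def neighbor_graph(X):
        k = max(1, X.shape[0] // 100)             # the 1% nearest neighbors
        A = kneighbors_graph(X, n_neighbors=k)    # sparse 0/1 adjacency
        return ((A + A.T) > 0).toarray().astype(float)   # symmetrized N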
USPS Digits This dataset consists of images of handwritten digits at a resolution of 16 × 16 pixels. We normalized the data to the range [−1, 1] and used the test set containing 2007 observations. Since it is a digit recognition task, we have Y ∈ {0, ..., 9}. Y is used to construct the matrix L by applying the kernel k(y, y') = δ_{y,y'}. This kernel further promotes embeddings where images from the same
class are grouped tighter. Figure 1 shows the results produced by MUHSIC, MVU and PCA.
The overall properties of the embeddings are similar across the three methods ('2' on the left, '1' on the right, '7' on top, and '8' at the bottom). Arguably MUHSIC produces a clearer visualization. For instance, images of '5' are clustered tighter in this case than with the other two methods. Furthermore, MUHSIC also results in much better separation between images from different classes. For instance, the overlap between '4' and '6' produced by MVU and PCA is largely reduced by MUHSIC. Similar results also hold for '0' and '5'.
Figure 1 also shows the eigenspectrum of K produced by different methods. The eigenvalues are
sorted in descending order and normalized by the trace of K. Each patch in the color bar represents
an eigenvalue. We see that MUHSIC results in 3 significant eigenvalues, MVU results in 10, while
PCA produces a grading of many eigenvalues (as can be seen by an almost continuously changing
spectrum in the spectral diagram). This confirms our reasoning of Section 5 that the spectrum
generated by MUHSIC is likely to be considerably sparser than that of MVU.
Newsgroups This dataset consists of Usenet articles collected from 20 different newsgroups. We
use a subset of 2000 documents for our experiments (100 articles from each newsgroup). We remove
the headers from the articles before the preprocessing while keeping the subject line. There is a clear
hierarchy in the newsgroups. For instance, 5 topics are related to computer science, 3 are related
to religion, and 4 are related to recreation. We will use these different topics as side information
and apply a delta kernel k(y, y') = δ_{y,y'} on them. Similar to USPS digits we want to preserve the
identity of individual newsgroups. While we did not encode hierarchical information for MVU we
recover a meaningful hierarchy among topics, as can be seen in Figure 2.
¹ Preprocessed data are available at http://www.it.usyd.edu.au/~lesong/muhsic_datasets.html.
Figure 1: Embedding of 2007 USPS digits produced by MUHSIC, MVU and PCA respectively.
Colors of the dots are used to denote digits from different classes. The color bar below each figure
shows the eigenspectrum of the learned kernel matrix K.
Figure 2: Embedding of 2000 newsgroup articles produced by MUHSIC, MVU and PCA respectively. Colors and shapes of the dots are used to denote articles from different newsgroups. The
color bar below each figure shows the eigenspectrum of the learned kernel matrix K.
A distinctive feature of the visualizations is that MUHSIC groups articles from individual topics
more tightly than MVU and PCA. Furthermore, the semantic information is also well preserved by
MUHSIC. For instance, on the left side of the embedding, all computer science topics are placed
adjacent to each other; comp.sys.ibm.pc.hardware and comp.os.ms-windows.misc are adjacent and
well separated from comp.sys.mac.hardware and comp.windows.x and comp.graphics. The latter is
meaningful since Apple computers are more popular in graphics (so are X windows based systems
for scientific visualization). Likewise we see that on the top we find all recreational topics (with
rec.sport.baseball and rec.sport.hockey clearly distinguished from the rec.autos and rec.motorcycles
groups). A similar adjacency between talk.politics.mideast and soc.religion.christian is quite interesting. The layout suggests that the content of talk.politics.guns and of sci.crypt is quite different
from other Usenet discussions.
NIPS Papers We used the 1735 regular NIPS papers from 1987 to 1999. They are scanned from
the proceedings and transformed into text files via OCR. The table of contents (TOC) is also available. We parse the TOC and construct a coauthor network from it. Our goal is to embed the papers
by taking the coauthor information into account. As kernel k(y, y 0 ) we simply use the number of
authors shared by two papers. To illustrate this we highlighted some known researchers. Furthermore, we also annotated some papers to show the semantics revealed by the embedding. Figure 3
shows the results produced by MUHSIC, MVU and PCA.
All three methods correctly represent the two major topics of NIPS papers: artificial systems, i.e.
machine learning (they are positioned on the left side of the visualization) and natural systems,
Figure 3: Embedding of 1735 NIPS papers produced by MUHSIC, MVU and PCA. Papers by some representative (combinations of) researchers are highlighted
as indicated by the legend. The color bar below each figure shows the eigenspectrum of the learned kernel matrix K. The yellow diamond in the graph denotes the
current paper as submitted to NIPS. This paper is placed in the location of its nearest neighbor; more details are in the appendix.
i.e. computational neuroscience (which lie on the right). This is confirmed by examining the
highlighted researchers. For instance, the papers by Smola, Schölkopf and Jordan are embedded on
the left, whereas the many papers by Sejnowski, Dayan and Bialek can be found on the right.
Unique to the visualization of MUHSIC is that there is a clear grouping of the papers by researchers.
For instance, papers on reinforcement learning (Barto, Singh and Sutton) are on the upper left corner;
papers by Hinton (computational cognitive science) are near the lower left corner; and papers by
Sejnowski and Dayan (computational neuroscientists) are clustered to the right side and adjacent
to each other. Interestingly, papers by Jordan (at that time best-known for his work in graphical
models) are grouped close to the papers on reinforcement learning. This is because Singh used to be
a postdoc of Jordan. Another interesting trend is that papers on new fields of research are embedded
on the edges. For instance, papers on reinforcement learning (Barto, Singh and Sutton), are along
the left edge. This is consistent with the fact that they presented some interesting new results during
this period (recall that the time period of the dataset is 1987 to 1999).
Note that while MUHSIC groups papers according to authors, thereby preserving the macroscopic
structure of the data it also reveals the microscopic semantics between the papers. For instance, the
4 papers (numbered from 6 to 9 in Figure 3) by Smola, Schölkopf, Hinton and Dayan are very close
to each other. Although their titles do not convey strong similarity information, these papers all used
handwritten digits for the experiments. A second example are papers by Dayan. Although most of
his papers are on the neuroscience side, two of his papers (numbered 14 and 15) on reinforcement
learning can be found on the machine learning side. A third example are papers by Bialek and
Hinton on spiking neurons (numbered 20, 21 and 23). Although Hinton?s papers are mainly on the
left, his paper on spiking Boltzmann machines is closer to Bialek?s two papers on spiking neurons.
Discussion
In summary, MUHSIC provides an embedding of the data which preserves side information possibly
available at training time. This way we have a means of controlling which representation of the
data we obtain rather than having to rely on our luck that the representation found by MVU just
happens to match what we want to obtain. It makes feature extraction robust to spurious interactions
between observations and noise (see the appendix for an example of adjacency matrices and further
discussion). A fortuitous side-effect is that if the matrix containing side information is of low rank,
the reduced representation learned by MUHSIC can be lower rank than that obtained by MVU, too.
Finally, we showed that MVU and MUHSIC can be formulated as feature extraction for obtaining
maximally dependent features. This provides an information theoretic footing for the (brilliant)
heuristic of maximizing the trace of a covariance matrix [1].
The notion of extracting features of the data which are maximally dependent on the original data
is far more general than what we described in this paper. In particular, one may show that feature
selection [7] and clustering [8] can also be seen as special cases of this framework.
Acknowledgments NICTA is funded through the Australian Government's Backing Australia's
Ability initiative, in part through the ARC. This research was supported by the Pascal Network.
References
[1] K. Q. Weinberger, F. Sha, and L. K. Saul. Learning a kernel matrix for nonlinear dimensionality reduction.
In Proceedings of the 21st International Conference on Machine Learning, Banff, Canada, 2004.
[2] J. Sun, S. Boyd, L. Xiao, and P. Diaconis. The fastest mixing Markov process on a graph and a connection
to a maximum variance unfolding problem. SIAM Review, 48(4):681-699, 2006.
[3] A. Gretton, O. Bousquet, A. J. Smola, and B. Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In S. Jain, H. U. Simon, and E. Tomita, editors, Proceedings Algorithmic Learning Theory,
pages 63-77, Berlin, Germany, 2005. Springer-Verlag.
[4] K. Weinberger, F. Sha, Q. Zhu, and L. Saul. Graph Laplacian regularization for large-scale semidefinite
programming. In Neural Information Processing Systems, 2006.
[5] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. J. Mach. Learn. Res., 5:73-99, 2004.
[6] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood component analysis. In
Advances in Neural Information Processing Systems 17, 2004.
[7] L. Song, A. Smola, A. Gretton, K. Borgwardt, and J. Bedo. Supervised feature selection via dependence
estimation. In Proc. Intl. Conf. Machine Learning, 2007.
[8] L. Song, A. Smola, A. Gretton, and K. Borgwardt. A dependence maximization view of clustering. In
Proc. Intl. Conf. Machine Learning, 2007.
Cooled and Relaxed Survey Propagation for MRFs
Hai Leong Chieu 1,2, Wee Sun Lee 2
1 Singapore MIT Alliance
2 Department of Computer Science, National University of Singapore
haileong@nus.edu.sg, leews@nus.edu.sg
Yee-Whye Teh
Gatsby Computational Neuroscience Unit, University College London
ywteh@gatsby.ucl.ac.uk
Abstract
We describe a new algorithm, Relaxed Survey Propagation (RSP), for finding
MAP configurations in Markov random fields. We compare its performance with
state-of-the-art algorithms including the max-product belief propagation, its sequential tree-reweighted variant, residual (sum-product) belief propagation, and
tree-structured expectation propagation. We show that it outperforms all approaches for Ising models with mixed couplings, as well as on a web person
disambiguation task formulated as a supervised clustering problem.
1 Introduction
Energy minimization is the problem of finding a maximum a posteriori (MAP) configuration in a
Markov random field (MRF). A MAP configuration is an assignment of values to variables that
maximizes the probability (or minimizes the energy) in the MRF. Energy minimization has many
applications; for example, in computer vision it is used for applications such as stereo matching [11].
As energy minimization in general MRFs is computationally intractable, approximate inference algorithms based on belief propagation are often necessary in practice. Such algorithms are split into
two classes: max-product and variants address the problem by trying to find a MAP configuration
directly, while sum-product and variants return approximate marginal distributions which can be
used to estimate a MAP configuration. It has been shown that the max-product algorithm converges
to neighborhood optima [18], while the sum-product algorithm converges to local minima of the
Bethe approximation [20]. Convergence of these algorithms is important for good approximations.
Recent work (e.g. [16, 8]) on sufficient conditions for convergence of sum-product algorithms suggests that they converge better on MRFs containing potentials with small strengths. In this paper,
we propose a novel algorithm, called Relaxed Survey Propagation (RSP), based on performing the
sum-product algorithm on a relaxed MRF. In the relaxed MRF, there is a parameter vector y that
can be optimized for convergence. By optimizing y to reduce the strengths of potentials, we show
empirically that RSP converges on MRFs where other algorithms fail to converge.
The relaxed MRF is built in two steps, by first (i) converting the energy minimization problem into
its weighted MAX-SAT equivalent [17], and then (ii) constructing a relaxed version of the survey
propagation MRF proposed in [14]. We prove that the relaxed MRF has approximately equal joint
distribution (and hence the same MAP and marginals) as the original MRF, independent (to a large
extent) of the parameter vector y. Empirically, we show that RSP, when run at low temperatures
("cooled"), performs well for the application of energy minimization. For max-product algorithms,
we compare against the max-product algorithm and its sequential tree-reweighted variant, which
is guaranteed to converge [11]. For sum-product algorithms, we compare against residual belief
propagation [6] as a state-of-the-art asynchronous belief propagation algorithm, as well as the treestructured expectation propagation [15], which has been shown to be a special case of generalized
belief propagation [19]. We show that RSP outperforms all approaches for Ising models with mixed
couplings, as well as in a supervised clustering application for web person disambiguation.
[Figure 1: factor graph diagrams. Panel (a) G = (V, F): factors a, b, c over variables x1, x2. Panel (b) W = (B, C): WMS variables ω(1,1), ω(1,2), ω(2,1), ω(2,2) with interaction clauses α1 to α4 and β1 to β4 and positivity clauses γ(1), γ(2); the legend distinguishes edges where ω is a negative literal in a clause from those where it is a positive literal.]
Figure 1: The variables x1, x2 in (a) are binary, resulting in 4 variables in (b). The clauses α1 to α4
in (b) are entries in the factor a in (a), β1 and β2 (resp. β3 and β4) are from b (resp. c). γ(1) and
γ(2) are the positivity clauses. The relaxed MRF for RSP has a similar form to the graph in (b).
2 Preliminaries
A MRF, G = (V, F), is defined by a set of variables V and a set of factors F = {ψ_a}, where
each ψ_a is a non-negative function depending on X_a ⊆ V. We assume for simplicity that variables
in V have the same cardinality q, taking values in Q = {1, .., q}. For X_i ∈ V and X_a ⊆ V,
we denote by x_i the event that X_i = x_i, and by x_a the event X_a = x_a. To simplify notation,
we will sometimes write i ∈ V for X_i ∈ V, or a ∈ F for ψ_a ∈ F. The joint distribution over
configurations is defined by P(x) = (1/Z) ∏_a ψ_a(x_a), where Z is the normalization factor. When each
ψ_a is a positive function, the joint distribution can be written as P(x) = (1/Z) exp(−E(x)/T), where
E(x) = Σ_a −log ψ_a(x_a) is the energy function, and the temperature T is set to 1. A factor graph
[13] is a graphical representation of a MRF, in the form of a bipartite graph with two types of nodes,
the variables and the factors. Each factor ψ_a is connected to the variables in X_a, and each variable
X_i is connected to the set of factors, N(i), that depend on it. See Figure 1(a) for a simple example.
Weighted MAX-SAT conversion [17]: Before describing RSP, we describe the weighted MAX-SAT (WMS) conversion of the energy minimization problem for a MRF. The WMS problem is a
generalization of the satisfiability problem (SAT). In SAT, a set of boolean variables are constrained
by a boolean function in conjunctive normal form, which can be treated as a set of clauses. Each
clause is a set of literals (a variable or its negation), and is satisfied if one of its literals evaluates to
1. The SAT problem consists of finding a configuration that satisfies all the clauses. In WMS, each
clause has a weight, and the WMS problem consists of finding a configuration with maximum total
weight of satisfied clauses (called the weight of the configuration). We describe the conversion [17]
of a MRF G = (V, F) into a WMS problem W = (B, C), where B is the set of boolean variables
and C the set of weighted clauses. Without loss of generality, we normalize factors in F to take
values in (0, 1]. For each X_i ∈ V, introduce the variables ω_{(i,x_i)} ∈ B as the predicate that X_i = x_i.
For convenience, we index variables in B either by k or by (i, x_i), denote factors in F with the Roman
alphabet (e.g. a, b, c) and clauses in C with the Greek alphabet (e.g. α, β, γ). For a clause α ∈ C, we
denote by C(α) the set of variables in α. There are two types of clauses in C: interaction and
positivity clauses.
Definition 1. Interaction clauses: For each entry ψ_a(x_a) in ψ_a ∈ F, introduce the clause α =
∨_{x_i ∈ x_a} ¬ω_{(i,x_i)} with the weight w_α = −log(ψ_a(x_a)). We write α @ a to show that the clause α
comes from the factor ψ_a ∈ F, and we denote by a = src(α) the factor ψ_a ∈ F for which α @ a.
The violation of an interaction clause corresponding to ψ_a(x_a) entails that ω_{(i,x_i)} = 1 for all x_i ∈
x_a. This corresponds to the event that X_i = x_i for X_i ∈ X_a.
Definition 2. Positivity clauses: for each X_i ∈ V, introduce the clause γ(i) = ∨_{x_i ∈ Q} ω_{(i,x_i)} with
weight w_{γ(i)} = 2 Σ_{α ∈ C_i} w_α, where C_i is the set of interaction clauses containing any variable in {ω_{(i,x_i)}}_{x_i ∈ Q}. For X_i ∈ V, we denote by γ(i) the corresponding positivity clause in C, and
for a positivity clause γ ∈ C, we denote by src(γ) the corresponding variable in V.
Positivity clauses have large weights to ensure that for each X_i ∈ V, at least one predicate in
{ω_{(i,x_i)}}_{x_i ∈ Q} equals 1. To map ω back to a configuration in the original MRF, exactly one variable
in each set {ω_{(i,x_i)}}_{x_i ∈ Q} can take the value 1. We call such configurations valid configurations:
Definition 3. A configuration is valid if, for all X_i ∈ V, exactly one of the indicators {ω_{(i,x_i)}}_{x_i ∈ Q} equals
1. There are two types of invalid configurations: MTO configurations, where more than one variable
in the set {ω_{(i,x_i)}}_{x_i ∈ Q} equals 1, and AZ configurations, where all variables in the set equal zero.
For valid configurations ω, let x(ω) be the corresponding configuration of ω in V.
For valid configurations ω, and for each a ∈ F, exactly one interaction clause in {α}_{α @ a} is violated:
when the α corresponding to ψ_a(x_a) is violated, we have X_a = x_a in x(ω). Valid configurations have
locally maximal weights [17]: MTO configurations have low weights since in all interaction clauses,
variables appear as negative literals. AZ configurations have low weights because they violate the
positivity clauses. See Figure 1 for an example of a WMS equivalent of a simple factor graph.
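To make Definitions 1 and 2 concrete, the following sketch (our own illustration; the dictionary-based MRF encoding is an assumption, not the authors' code) converts a small MRF into weighted interaction and positivity clauses:

```python
import numpy as np
from itertools import product

def mrf_to_wms(factors, q):
    """Sketch of Definitions 1 and 2: convert an MRF into weighted clauses.
    factors: dict mapping a tuple of variable ids to a numpy table over their
             joint states, e.g. {(0, 1): psi}, entries normalized into (0, 1].
    Returns (interaction, positivity) as lists of (literals, weight); a literal
    (i, x, positive) refers to omega_{(i,x)}, negated when positive is False."""
    interaction, positivity = [], []
    variables = sorted({i for scope in factors for i in scope})
    for scope, table in factors.items():
        for assignment in product(range(q), repeat=len(scope)):
            # Clause alpha is violated exactly when X_scope = assignment,
            # so every literal is a negated indicator; w_alpha = -log psi.
            literals = [(i, x, False) for i, x in zip(scope, assignment)]
            interaction.append((literals, -np.log(table[assignment])))
    for i in variables:
        # w_gamma(i) = 2 * (sum of weights of interaction clauses touching i).
        w = 2 * sum(w_a for lits, w_a in interaction
                    if any(j == i for j, _, _ in lits))
        positivity.append(([(i, x, True) for x in range(q)], w))
    return interaction, positivity
```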
3 Relaxed Survey Propagation
In this section, we transform the WMS problem W = (B, C) into another MRF, G_s = (V_s, F_s),
based on the construction of the MRF for survey propagation [14]. We show (in Section 3.1) that
under this framework, we are able to remove MTO configurations, and AZ configurations have
negligible probability. In survey propagation, in addition to the values {0, 1}, variables can take a
third value, ∗ ("joker" state), signifying that the variable is free to take either 0 or 1 without violating
any clause. In this section, we assume that variables ω_k take values in {0, 1, ∗}.
Definition 4. [14] A variable ω_k is constrained by the clause α ∈ C if it is the unique
satisfying variable for clause α (all other variables violate α). Define CON_{k,α}(ω_{C(α)}) =
δ(ω_k is constrained by α), where δ(P) equals 1 if the predicate P is true, and 0 otherwise.
We introduce the parameters {y_a}_{a∈F} and {y_i}_{i∈V} by modifying the definition of VAL in [14]:
Definition 5. An assignment ω is invalid for clause α if and only if all variables are unsatisfying
except for exactly one for which ω_k = ∗. (In this case, ω_k cannot take ∗ as it is constrained.) Define

  VAL_α(ω_{C(α)}) = exp(w_α) if ω_{C(α)} satisfies α;  exp(−y_{src(α)}) if ω_{C(α)} violates α;  0 if ω_{C(α)} is invalid.   (1)

The term exp(−y_{src(α)}) is the penalty for violating clauses, with src(α) ∈ V ∪ F defined in Definitions 1 and 2. For interaction clauses, we index y_a by a ∈ F because among valid configurations,
exactly one clause in the group {α}_{α @ a} is violated, and exp(−y_a) becomes a constant factor. Positivity clauses are always satisfied and the penalty factor will not appear for valid configurations.
Definition 6. [14] Define the parent set P_k of a variable ω_k to be the set of clauses for which ω_k is
the unique satisfying variable (i.e. the set of clauses constraining ω_k).
We now construct the MRF G_s = (V_s, F_s), where the variables σ_k ∈ V_s are of the form σ_k = (ω_k, P_k),
with ω_k variables in the WMS problem W = (B, C) (see Figure 1). We define single-variable
compatibilities (Φ_k) and clause compatibilities (Ψ_α) as in [14]:

  Φ_k(σ_k = (ω_k, P_k)) = ρ_0 if P_k = ∅, ω_k ≠ ∗;  ρ_∗ if P_k = ∅, ω_k = ∗;  1 for any other valid (ω_k, P_k),  where ρ_0 + ρ_∗ = 1,   (2)

  Ψ_α(σ_{C(α)} = {ω_k, P_k}_{k∈C(α)}) = VAL_α(ω_{C(α)}) × ∏_{k∈C(α)} δ((α ∈ P_k) = CON_{α,k}(ω_{C(α)})),   (3)
where δ is defined in Definition 4. The single-variable compatibilities Φ_k(σ_k) are defined so that
when ω_k is unconstrained (i.e. P_k = ∅), Φ_k(σ_k) takes the value ρ_∗ or ρ_0 depending on whether
ω_k equals ∗. The clause compatibilities introduce the clause weights and penalties into the joint
distribution. The factor graph has the following underlying distribution:

  P({ω_k, P_k}_k) ∝ ρ_0^{n_0} ρ_∗^{n_∗} ∏_{α∈SAT(ω)} exp(w_α) ∏_{α∈UNSAT(ω)} exp(−y_{src(α)}),   (4)

where n_0 is the number of unconstrained variables in ω (taking values other than ∗), and n_∗ the number of variables taking ∗.
where n0 is the number of unconstrained variables in ?, and n? the number of variables taking ?.
Comparing RSP with SP-? in [14], we see that
Theorem 1. In the limit where all ya , yi ? ?, RSP is equivalent to SP-? [14], with ? = ?? .
Taking y to infinity correspond to disallowing violated constraints, and SP-? was formulated for
satisfiable SAT problems,
Q where violated constraints are forbidden. In this case, all clauses must be
satisfied and the term ??C exp(w? ) in Equation 4 is a constant, and P (?) ? ?0n0 ??n? .
3.1 Main result
In the following, we assume the following settings: (1) ρ_∗ = 1 and ρ_0 = 0; (2) for positivity clauses
γ(i), let y_i = 0; and (3) in the original MRF G = (V, F), single-variable factors are defined on
all variables (we can always define uniform factors). Under these settings, we will prove the main
result that the joint distribution on the relaxed MRF is approximately equal to that on the original
MRF, and that RSP estimates marginals on the original MRF. First, we prove the following lemma:
Lemma 1. The joint probability over valid configurations on G_s is proportional to the joint probability of the corresponding configurations on the original MRF, G = (V, F).
Proof. For valid configurations, all positivity clauses are satisfied, and for each a ∈ F, all valid
configurations have one violated constraint in the set of interaction clauses {α}_{α @ a}. Hence the
penalty term for violated constraints, ∏_{a∈F} exp(−y_a), is a constant factor. Let W = Σ_{α∈C} w_α be the
sum of all weights. For a valid configuration ω,

  P(ω) ∝ exp( Σ_{α∈SAT(ω)} w_α ) = exp( W − Σ_{α∈UNSAT(ω)} w_α ) ∝ ∏_{a∈F} ψ_a(x(ω)).
Lemma 2. All configurations containing ∗ have zero probability in the MRF G_s, and there is a
one-to-one mapping between configurations σ = {ω_k, P_k}_{k∈V_s} and configurations ω = {ω_k}_{k∈B}.
Proof. Single-variable factors on G translate into single-literal clauses in the WMS formulation,
which in turn become single-variable factors in G_s. For a variable σ_k = (ω_k, P_k) with a single-variable factor Ψ_β, we have VAL_β(ω_k = ∗) = 0. This implies Ψ_β(σ_k = (∗, P_k)) = 0.
Lemma 3. MTO configurations have n_0 ≠ 0 and, since ρ_0 = 0, they have zero probability.
Proof. In MTO configurations, there exist (i, x_i, x′_i) with ω_{(i,x_i)} = ω_{(i,x′_i)} = 1. The positivity clause γ(i) is hence
non-constraining for these variables, and since all other clauses connected to them are interaction
clauses and contain them as negative literals, both variables are unconstrained. Hence n_0 ≠ 0 and,
from Equation 4, for ρ_0 = 0, they have zero probability.
The above lemmas lead to the following theorem:
Theorem 2. Assuming that exp(w_{γ(i)}) ≫ 1 for all X_i ∈ V, the joint distribution over the relaxed
MRF G_s = (V_s, F_s) is approximately equal to the joint distribution over the original MRF, G =
(V, F). Moreover, RSP estimates the marginals on the original MRF, and at the fixed points, the
belief at each node, B(ω_{(i,x_i)} = 1), is an estimate of P(X_i = x_i), and Σ_{x_i∈Q} B(ω_{(i,x_i)} = 1) ≈ 1.
We can understand the above theorem as follows: if we assume that the probability of AZ invalid configurations is negligible (equivalent to assuming that the probability of violating positivity
clauses is negligible, i.e. exp(w_{γ(i)}) ≫ exp(−y_{src(γ(i))}) = 1), then we have only valid configurations left. MTO invalid configurations are ruled out by Lemma 3. Since the positivity clauses have
large weights, exp(w_{γ(i)}) ≫ 1 usually holds. Hence RSP, as the sum-product algorithm on the
relaxed MRF, returns estimates of the marginals P(X_i = x_i) as B(ω_{(i,x_i)} = 1).
3.2 Choosing y
Valid configurations have a joint probability with the factor ∏_{a∈F} exp(−y_a), while AZ configurations do not. However, Theorem 2 states that, if exp(w_{γ(i)}) ≫ 1, AZ configurations have negligible probability. Empirically,
we observe that for a large range of values of {y_a}_{a∈F}, RSP returns
marginals satisfying Σ_{x_i} B(ω_{(i,x_i)} = 1) ≈ 1, indicating that AZ configurations do indeed have
negligible probability. We can hence select the values of {y_a}_{a∈F} for better convergence properties.
We describe heuristics based on the sufficient conditions for convergence of sum-product algorithms
in [16]. To simplify notation, we write the conditions for a MRF with pairwise factors ψ_a:

  max_{X_j∈V, b∈N(j)} Σ_{a∈N(j)\b} N(ψ_a) < 1,   (5)

  where  N(ψ_a) = sup_{x_i ≠ x′_i} sup_{x_j ≠ x′_j} tanh( (1/4) | log [ ψ_a(x_i, x_j) ψ_a(x′_i, x′_j) ] / [ ψ_a(x′_i, x_j) ψ_a(x_i, x′_j) ] | ).
Mooij and Kappen [16] have also derived another condition based on the spectral radius of a matrix having N(ψ_a) as entries. These conditions lead us to believe that the sum-product algorithm
converges better on MRFs with small N(ψ_a) (or the "strengths" of potentials in [8]). To calculate
N(Ψ_α) for the interaction clause α, we characterize these factors as follows:

  Ψ_α((ω_k, P_k), (ω_l, P_l)) = exp(−y_{src(α)}) if clause α is violated, i.e. (ω_k, ω_l) = (0, 0);  exp(w_α) otherwise.   (6)

As y_a is shared among α @ a, we choose y_a to minimize Σ_{α@a} N(Ψ_α) = Σ_{α@a} tanh( (1/4) |w_α + y_a| ).
A good approximation for y_a would be the median of {−w_α}_{α@a}. For our experiments, we divide
the search range for y_a into 10 bins, and use fminsearch in Matlab to find a local minimum.
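A minimal sketch of this heuristic follows (our own illustration; a coarse grid over the range spanned by {−w_α}, plus the median candidate, stands in for the Matlab fminsearch call):

```python
import numpy as np

def choose_y(clause_weights, num_bins=10):
    """Pick y_a to minimize sum_{alpha @ a} tanh(|w_alpha + y_a| / 4),
    the total factor strength implied by Equation (6)."""
    w = np.asarray(clause_weights, dtype=float)
    strength = lambda y: float(np.tanh(np.abs(w + y) / 4.0).sum())
    grid = np.linspace(-w.max(), -w.min(), num_bins)
    candidates = np.append(grid, np.median(-w))   # include the median heuristic
    return min(candidates, key=strength)
```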
3.3 Update equations and efficiency
While each message in RSP has large cardinality, we show in the supplementary material that,
under the settings of Section 3.1, the update equations can be simplified such that each factor passes
a single number to each variable. The interaction clause α sends a number m_{α→(i,x)} to each (i, x) ∈
C(α), and the positivity clause γ(i) sends a number m_{γ(i)→(i,x)} to (i, x) for each x ∈ Q. The
update equations are as follows (proofs in the supplementary material):

  m_{γ(i)→(i,x)} = Σ_{x′≠x} ∏_{α∈N(i,x′)\γ(i)} m_{α→(i,x′)} + exp(−w_{γ(i)}),   (7)

  m_{α→(i,x)} = [ m_{γ(j)→(j,x′)} + exp(−y_{src(α)} − w_α) ∏_{β∈N(j,x′)\{γ(j),α}} m_{β→(j,x′)} ] / [ m_{γ(j)→(j,x′)} + ∏_{β∈N(j,x′)\{γ(j),α}} m_{β→(j,x′)} ],   (8)

  B(ω_{(i,x)} = 0) ∝ m_{γ(i)→(i,x)};  B(ω_{(i,x)} = 1) ∝ ∏_{α∈N(i,x)\γ(i)} m_{α→(i,x)};  B(ω_{(i,x)} = ∗) = 0,   (9)

where (j, x′) in Equation (8) denotes the other variable of the pairwise interaction clause α.
We found empirically that the schedule of message updates affects convergence to a large extent. A
good schedule is to update all the α-messages first (updating the group of α-messages belonging
to each factor a ∈ F together), and then to update the γ-messages together. This seems to work
better than the schedule defined by residual belief propagation [6] on the relaxed MRF.
In terms of efficiency, for a MRF with N pairwise factors, the sum-product algorithm has 2N q real
numbers in the factor to variable messages, and RSP has 2N q + q. Empirically, we observe that RSP
on the relaxed MRF runs as fast as the simple sum-product algorithm on the original MRF, with an
overhead for determining the values of y.
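A sketch of one sweep under this schedule, for pairwise factors, is given below, following Equations (7) and (8) as reconstructed above; the data structures (message dictionaries keyed by clause and node) are our own assumptions rather than the authors' implementation:

```python
import numpy as np
from collections import defaultdict

def rsp_sweep(clauses, msg_a, msg_g, w_gamma, q):
    """One sweep of the simplified RSP updates (a sketch, not the authors' code).
    Each interaction clause alpha couples two WMS variables u = (i, xi), v = (j, xj).
    clauses: list of (aid, u, v, w_alpha, y_a).
    msg_a:   dict {(aid, node): float}, interaction-clause messages.
    msg_g:   dict {node: float}, positivity-clause messages, one per node (i, x).
    w_gamma: dict {i: positivity-clause weight w_gamma(i)}."""
    nbrs = defaultdict(list)
    for aid, u, v, w, y in clauses:
        nbrs[u].append(aid)
        nbrs[v].append(aid)

    def prod_in(node, skip=None):
        # Product of interaction messages into node, optionally excluding one.
        return np.prod([msg_a[(aid, node)] for aid in nbrs[node] if aid != skip])

    for aid, u, v, w, y in clauses:            # alpha-messages, Equation (8)
        for src, dst in ((v, u), (u, v)):
            rest = prod_in(src, skip=aid)
            msg_a[(aid, dst)] = (msg_g[src] + np.exp(-y - w) * rest) \
                                / (msg_g[src] + rest)

    for (i, x) in list(msg_g):                 # gamma-messages, Equation (7)
        msg_g[(i, x)] = sum(prod_in((i, xp)) for xp in range(q) if xp != x) \
                        + np.exp(-w_gamma[i])

    # Beliefs, Equation (9): B(omega_{(i,x)} = 1) up to normalization.
    return {node: prod_in(node) for node in msg_g}
```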
4 Experimental Results
While Ising models with attractive couplings are exactly solvable by graph-cut algorithms, general
Ising models with mixed couplings on complete graphs are NP-hard [4], and graph cut algorithms
are not applicable to graphs with mixed couplings [12]. In this section, we perform three sets of
experiments to show that RSP outperforms other approaches: the first set compares RSP and the
residual belief propagation on a simple graph, the second set compares the performance of various
methods on randomly generated graphs with mixed couplings, and the third set applies RSP to the
application of the web person disambiguation task.
A simple example: we use a 4-node complete graph of binary variables, with the two sets of factors
defined in Figure 2(a), for ε = +1 and −1. The case ε = −1 was used in [8] to illustrate how the
strengths of potentials affect convergence of the sum-product algorithm. We also show the case of
ε = +1 (an attractive network) as a case where the sum-product algorithm converges well. Both sets
of graphs (ε = +1 or −1) have uniform marginals and 2 MAP configurations (modes).
[Figure 2(a): the 4-node binary complete graph, with a coupling matrix parameterized by β and ε, and pairwise factors ψ_{i,j}(x_i, x_j) = exp(θ_{i,j}/4) if x_i = x_j and exp(−θ_{i,j}/4) if x_i ≠ x_j. Panels: (a) 4-node (binary) complete graph, (b) ε = +1, (c) ε = −1.]
Figure 2: In (a), we define the factors under the two settings ε = ±1. (b) and (c) show
the L2 distance between the returned marginals and the nearest mode of the graph. Circles on the
lines mean failure to converge, where we take the marginals at the last iteration.
In Figures 2(b) and 2(c), we show experimental results for ε = +1 and −1. In each case, we vary β from 0 to
12, and for each β, run residual belief propagation (RBP) damped at 0.5 and RSP (undamped) on the
corresponding graph. Both methods are randomly initialized. We plot the L2 distance between the
returned marginals and the nearest mode marginals (marginals with probability one on the modes).
The correct marginals are uniform, where the L2 distance is √0.5 ≈ 0.7. For small β, both methods
converge to the correct marginals. As β is increased, for ε = +1 in Figure 2(b), both approaches
converge to marginals with probability 1 on one of the modes. For ε = −1, however, RSP converges
again to marginals indicating a mode, while RBP faces convergence problems for β ≥ 8.
Increasing β corresponds to increasing N(ψ_{i,j}), and the sum-product algorithm fails to converge for
large β when ε = −1. When the algorithms converge for large β, they converge not to the correct
marginals but to a MAP configuration. Increasing β has the same effect as decreasing the temperature of a network: the behavior of the sum-product algorithm approaches that of the max-product
algorithm, i.e. the max-product algorithm is the sum-product algorithm at the zero-temperature limit.
Ising models with mixed couplings: we conduct experiments on complete graphs of size 20
with different percentages of attractive couplings, using the Ising model with the energy function
H(s) = −Σ_{i,j} θ_{i,j} s_i s_j − Σ_i θ_i s_i, where s_i ∈ {−1, 1}. We draw θ_i from U[0, 0.1]. To control the
percentage of attractive couplings, we draw θ_{i,j} from U[0, ρ], and randomly assign negative signs
to the θ_{i,j} with probability (1 − η), where η is the percentage of attractive couplings required. We
vary ρ from 1 to 3. In Figure 3, we plot the difference between the optimal energy (obtained with a
brute-force search) and the energy returned by each of the following approaches: RSP, max-product
belief propagation (MBP), the convergent tree-reweighted max-product belief propagation (TRW-S)
[11], residual sum-product belief propagation (RBP) [6], and tree-structured expectation propagation
(TEP) [15]. Each point on the graph is the average over 30 randomly generated networks. In Table
1, we compare RSP against these methods. When an algorithm does not converge, we take its result
at the last iteration. We damp RBP and TEP with a 0.5 damping factor. For RSP, MBP, TRW-S and
RBP, we randomly initialize the initial messages and take the best result after 5 restarts. For TEP,
we use five different trees consisting of a maximal spanning tree and four random stars [19]. For
RSP, RBP and TEP, which are variants of the sum-product algorithm, we lower the temperature by
a factor of 2 each time the method converges, and stop when the method fails to converge or if the
results are not improved over the last temperature. We observe that MBP outperforms TRW-S consistently: this agrees with [11] that MBP outperforms TRW-S for graphs with mixed couplings. While
the performance of TRW-S remains constant from 25% to 75%, the sum-product based algorithms
(RBP and TEP) improve as the percentage of attractive potentials is increased. In all three cases,
RSP is one of the best performing methods, beaten only by TEP at 2 points on the 50% graph. TEP,
being of the class of generalized belief propagation [19], runs significantly slower than RSP.
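The random instances are simple to generate; here is a sketch of the construction just described (our own illustration with numpy, where eta denotes the target fraction of attractive couplings):

```python
import numpy as np

def random_mixed_ising(n=20, rho=2.0, eta=0.5, rng=None):
    """Generate H(s) = -sum_{i<j} theta_ij s_i s_j - sum_i theta_i s_i on the
    complete graph, with a fraction eta of attractive (positive) couplings."""
    if rng is None:
        rng = np.random.default_rng(0)
    theta_i = rng.uniform(0.0, 0.1, size=n)
    theta_ij = np.triu(rng.uniform(0.0, rho, size=(n, n)), k=1)
    theta_ij *= np.where(rng.random((n, n)) < 1.0 - eta, -1.0, 1.0)
    theta_ij = theta_ij + theta_ij.T           # symmetric, zero diagonal
    return theta_i, theta_ij

def energy(s, theta_i, theta_ij):
    """Energy of a spin configuration s in {-1, +1}^n."""
    return -0.5 * s @ theta_ij @ s - theta_i @ s
```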
Supervised clustering: Finley and Joachims [7] formulated SVM^{cluster}, which learns an item-pair similarity measure, Sim(i, j), to minimize a correlation clustering objective on a training set. In
training SVM^{cluster}, they have to minimize E(x) = Σ_{i,j} Sim(i, j) δ(x_i, x_j), where x_i ∈ {1, .., U}
is the cluster id of item i, and U an upper bound on the number of clusters. They tried a greedy and
a linear programming approach, and concluded that the two approaches are comparable.
Due to time constraints, we did not implement SVM^{cluster}: instead we test our inference algorithms
on the pairwise classification clustering (PCC) baseline in [7]. The PCC baseline trains svmlight [9]
on training item-pairs, and runs the classifier through all test pairs. For each test pair (i, j), we apply
softmax to the classifier outputs to obtain the probability p_{i,j} that the pair is in the same cluster.
(a) 75% (b) 50% (c) 25%
Figure 3: Experiments on the complete-graph Ising model with mixed couplings (legend in (a)),
with different percentages of attractive couplings. The y-axis shows, in log scale, the average energy
difference between the configuration found by the algorithm and the optimal solution.
                75% attractive                     50% attractive                     25% attractive
ρ         1     1.5    2     2.5    3        1     1.5    2     2.5    3        1     1.5    2     2.5    3
mbp      2/0    2/0   0/0    1/0   1/0      7/6   11/5  14/0   10/2   9/6     20/2   13/3  16/0   13/3  15/2
trw-s   26/0   24/0  22/0   25/0  25/0     28/0   29/0  29/0   27/0  28/1     29/0   27/0  30/0   28/1  27/0
rbp      1/0    0/0   0/0    2/0   0/0     22/0   14/2  12/0    9/1  13/5     22/0   16/6  15/2   21/0  17/0
tep      2/0    2/0   0/0    2/0   0/0     14/3    9/3  11/2    6/2   6/5     23/1   15/4  10/2   16/2  15/2
opt      0/0    0/0   0/0    0/1   0/0      0/7    0/8   0/2    0/2   0/7      0/6   0/10   0/4    0/4   0/2
Table 1: Number of trials (out of 30) where RSP does better/worse than various methods. In particular, the last row (opt) shows the number of times that RSP does worse than the optimal solution.
Defining Sim(i, j) = log(p_{i,j}/(1 − p_{i,j})), we minimize E(x) to cluster the test set. We found that
the various inference algorithms perform poorly on the MRF for large U, even when they converge
(probably due to a large number of minima in the approximation). We are able to obtain lower-energy
configurations with the recursive 2-way partitioning procedure in [5] used for graph cuts. (Graph cuts
do not apply here as weights can be negative.) This procedure involves recursively running, e.g.,
RSP on the MRF for E(x) with U = 2, and applying the Kernighan-Lin algorithm [10] for local
refinements among the current partitions. Each time RSP returns a configuration that partitions the data,
we run RSP on each of the two partitions. We terminate the recursion when RSP assigns the same
value to all variables, placing all remaining items in one cluster.
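A sketch of the recursion follows (our own pseudocode-level illustration; `solve_binary` stands for a run of RSP with U = 2 and `refine` for the Kernighan-Lin local moves):

```python
def recursive_partition(items, solve_binary, refine=None):
    """Recursive 2-way partitioning: run a binary (U = 2) energy minimizer,
    recurse on each side, and stop when everything lands in one cluster.
    solve_binary(items) returns a 0/1 label per item."""
    labels = solve_binary(items)
    if refine is not None:
        labels = refine(items, labels)       # Kernighan-Lin style local moves
    left = [it for it, l in zip(items, labels) if l == 0]
    right = [it for it, l in zip(items, labels) if l == 1]
    if not left or not right:                # one cluster: terminate recursion
        return [items]
    return recursive_partition(left, solve_binary, refine) + \
           recursive_partition(right, solve_binary, refine)
```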
We use the web person disambiguation task defined in SemEval-2007 [1] as the test application.
Training data consists of 49 sets of web pages (we use 29 sets with more than 50 documents), where
each set (or domain) is the set of results from a search query on a person name. The test data contains another
30 domains. Each domain is manually annotated into clusters, with each cluster containing pages
referring to a single individual. We use a simple feature filtering approach to select features that
are useful across many domains in the training data. Candidate features include (i) words occurring
in only one document of the document-pair, (ii) words co-occurring in both documents, (iii) named
entity matches between the documents, and (iv) topic correlation features. For comparison, we
replace RSP with MBP and TRW-S as inference algorithms (we did not run RBP and TEP as they
are very slow on these problems because they often fail to converge). We also implemented the
greedy algorithm (Greedy) in [7]. We tried using the linear programming approach but free off-the-shelf solvers seem unable to scale to these problems. Results comparing RSP with Greedy, MBP
and TRW-S are shown in Table 2. The F-measure attained by RSP for this SemEval task [1] is
equal to the systems ranked second and third out of 16 participants (official results yet unpublished).
We found that although TRW-S is guaranteed to converge, it performs poorly. RSP converges far
better than MBP, but due to the Kernighan-Lin corrections that we run at each iteration, results can
sometimes be corrected to a large extent by the local refinements.
Method                                                          RSP      MBP      TRW-S     Greedy
Test domains where RSP attains lower/higher E(x) than Method    0/0      9/6      16/7      22/5
Percentage of convergence over all runs                         91%      74%      100% *    -
F-measure of purity and inverse purity [1]                      75.08%   74.97%   74.61%    74.78%
Table 2: Results for the web person disambiguation task. (*: TRW-S is guaranteed to converge)
5 Related work and conclusion
In this paper, we formulated RSP, generalizing the formulation of SP-ρ in [14]. SP-ρ is the sum-product interpretation of the survey propagation (SP) algorithm [3]. SP has been shown to work well
for hard instances of 3-SAT, near the phase transition where local search algorithms fail. However,
its application has been limited to constraint satisfaction problems [3]. In RSP, we took inspiration
from the SP-y algorithm [2] in adding a penalty term for violated clauses. SP-y works on MAX-SAT problems and SP can be considered as SP-y with y taken to ∞, hence disallowing violated
constraints. This is analogous to the relation between RSP and SP-ρ [14] (see Theorem 1). RSP is
however different from SP-y since we address weighted MAX-SAT problems. Even if all weights
are equal, RSP is still different from SP-y, which, so far, does not have a sum-product formulation on
an alternative MRF. We show that while RSP is the sum-product algorithm on a relaxed MRF, it can
be used for solving the energy minimization problem. By tuning the strengths of the factors (based
on convergence criteria in [16]) while keeping the underlying distribution approximately correct,
RSP converges well even at low temperatures. This enables it to return low-energy configurations
on MRFs where other methods fail. As far as we know, this is the first application of convergence
criteria to aid the convergence of belief propagation algorithms, and this mechanism can be used to
exploit future work on sufficient conditions for the convergence of belief propagation algorithms.
Acknowledgments
We would like to thank Yee Fan Tan for his help on the web person disambiguation task, and Tomas
Lozano-Perez and Leslie Pack Kaelbling for valuable comments on the paper. The research is partially supported by ARF grant R-252-000-240-112.
References
[1] "Web person disambiguation task at SemEval," 2007. [Online]. Available: http://nlp.uned.es/weps/taskdescription-2.html
[2] D. Battaglia, M. Kolar, and R. Zecchina, "Minimizing energy below the glass thresholds," Physical Review E, vol. 70, 2004.
[3] A. Braunstein, M. Mezard, and R. Zecchina, "Survey propagation: An algorithm for satisfiability," Random Struct. Algorithms, vol. 27, no. 2, 2005.
[4] B. A. Cipra, "The Ising model is NP-complete," SIAM News, vol. 33, no. 6, 2000.
[5] C. Ding, "Spectral clustering," ICML '04 Tutorial, 2004.
[6] G. Elidan, I. McGraw, and D. Koller, "Residual belief propagation: Informed scheduling for asynchronous
message passing," in UAI, 2006.
[7] T. Finley and T. Joachims, "Supervised clustering with support vector machines," in ICML, 2005.
[8] T. Heskes, "On the uniqueness of loopy belief propagation fixed points," Neural Computation, vol. 16,
2004.
[9] T. Joachims, Learning to Classify Text Using Support Vector Machines: Methods, Theory and Algorithms.
Norwell, MA, USA: Kluwer Academic Publishers, 2002.
[10] B. Kernighan and S. Lin, "An efficient heuristic procedure for partitioning graphs," Bell Systems Technical Report, 1970.
[11] V. Kolmogorov, "Convergent tree-reweighted message passing for energy minimization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 10, 2006.
[12] V. Kolmogorov and R. Zabih, "What energy functions can be minimized via graph cuts?" IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 2, 2004.
[13] F. Kschischang, B. Frey, and H. Loeliger, "Factor graphs and the sum-product algorithm," IEEE Transactions on Information Theory, vol. 47, no. 2, 2001.
[14] E. Maneva, E. Mossel, and M. Wainwright, "A new look at survey propagation and its generalizations,"
2004. [Online]. Available: http://arxiv.org/abs/cs.CC/0409012
[15] T. Minka and Y. Qi, "Tree-structured approximations by expectation propagation," in NIPS, 2004.
[16] J. M. Mooij and H. J. Kappen, "Sufficient conditions for convergence of loopy belief propagation," in
UAI, 2005.
[17] J. D. Park, "Using weighted MAX-SAT engines to solve MPE," in AAAI, 2002.
[18] Y. Weiss and W. T. Freeman, "On the optimality of solutions of the max-product belief-propagation algorithm in arbitrary graphs," IEEE Transactions on Information Theory, vol. 47, no. 2, 2001.
[19] M. Welling, T. Minka, and Y. W. Teh, "Structured region graphs: Morphing EP into GBP," in UAI, 2005.
[20] J. S. Yedidia, W. T. Freeman, and Y. Weiss, "Constructing free-energy approximations and generalized
belief propagation algorithms," IEEE Transactions on Information Theory, vol. 51, no. 7, 2005.
The Infinite Gamma-Poisson Feature Model
Michalis K. Titsias
School of Computer Science,
University of Manchester, UK
mtitsias@cs.man.ac.uk
Abstract
We present a probability distribution over non-negative integer valued matrices
with possibly an infinite number of columns. We also derive a stochastic process
that reproduces this distribution over equivalence classes. This model can play
the role of the prior in nonparametric Bayesian learning scenarios where multiple
latent features are associated with the observed data and each feature can have
multiple appearances or occurrences within each data point. Such data arise naturally when learning visual object recognition systems from unlabelled images.
Together with the nonparametric prior we consider a likelihood model that explains the visual appearance and location of local image patches. Inference with
this model is carried out using a Markov chain Monte Carlo algorithm.
1 Introduction
Unsupervised learning using mixture models assumes that one latent cause is associated with each
data point. This assumption can be quite restrictive and a useful generalization is to consider factorial
representations which assume that multiple causes have generated the data [11]. Factorial models
are widely used in modern unsupervised learning algorithms; see e.g. algorithms that model text
data [2, 3, 4]. Algorithms for learning factorial models should deal with the problem of specifying
the size of the representation. Bayesian learning and especially nonparametric methods such as the
Indian buffet process [7] can be very useful for solving this problem.
Factorial models usually assume that each feature occurs once in a given data point. This is insufficient for modeling the precise generation mechanism of data such as images. An image can
contain views of multiple object classes such as cars and humans and each class may have multiple
occurrences in the image. To deal with features having multiple occurrences, we introduce a probability distribution over sparse non-negative integer valued matrices with possibly an unbounded
number of columns. Each matrix row corresponds to a data point and each column to a feature
similarly to the binary matrix used in the Indian buffet process [7]. Each element of the matrix
can be zero or a positive integer and expresses the number of times a feature occurs in a specific
data point. This model is derived by considering a finite gamma-Poisson distribution and taking
the infinite limit for equivalence classes of non-negative integer valued matrices. We also present a
stochastic process that reproduces this infinite model. This process uses the Ewens's distribution [5]
over integer partitions, which was introduced in the population genetics literature and is equivalent to
the distribution over partitions of objects induced by the Dirichlet process [1].
The infinite gamma-Poisson model can play the role of the prior in a nonparametric Bayesian learning scenario where both the latent features and the number of their occurrences are unknown. Given
this prior, we consider a likelihood model which is suitable for explaining the visual appearance and
location of local image patches. Introducing a prior for the parameters of this likelihood model, we
apply Bayesian learning using a Markov chain Monte Carlo inference algorithm and show results in
some image data.
2 The finite gamma-Poisson model
Let X = {X1 , . . . , XN } be some data where each data point Xn is a set of attributes. In section
4 we specify Xn to be a collection of local image patches. We assume that each data point is
associated with a set of latent features and each feature can have multiple occurrences. Let znk
denote the number of times feature k occurs in the data point Xn . Given K features, Z = {znk } is
an N × K non-negative integer valued matrix that collects together all the z_nk values so that each row
corresponds to a data point and each column to a feature. Given that z_nk is drawn from a Poisson
with a feature-specific parameter λ_k, Z follows the distribution

  P(Z | {λ_k}) = ∏_{n=1}^N ∏_{k=1}^K λ_k^{z_nk} exp{−λ_k} / z_nk!  =  ∏_{k=1}^K λ_k^{m_k} exp{−N λ_k} / ∏_{n=1}^N z_nk!,   (1)
where m_k = Σ_{n=1}^N z_nk. We further assume that each λ_k parameter follows a gamma distribution
that favors sparsity (in a sense that will be explained shortly):

  G(λ_k; α/K, 1) = λ_k^{α/K − 1} exp{−λ_k} / Γ(α/K).   (2)
The hyperparameter α itself is given a vague gamma prior G(α; α_0, β_0). Using the above equations
we can easily integrate out the parameters {λ_k} as follows:

  P(Z | α) = ∏_{k=1}^K Γ(m_k + α/K) / [ Γ(α/K) (N + 1)^{m_k + α/K} ∏_{n=1}^N z_nk! ],   (3)
which shows that, given the hyperparameter α, the columns of Z are independent. Note that the above
distribution is exchangeable since reordering the rows of Z does not alter the probability. Also, as
K increases the distribution favors sparsity. This can be shown by taking the expectation of the sum
of all elements of Z. Since the columns are independent this expectation is K Σ_{n=1}^N E(z_nk), and
E(z_nk) is given by

  E(z_nk) = Σ_{z_nk=0}^∞ z_nk NB(z_nk; α/K, 1/2) = α/K,   (4)
where NB(z_{nk}; r, p), with r > 0 and 0 < p < 1, denotes the negative binomial distribution over non-negative integers,
NB(z_{nk}; r, p) = \frac{\Gamma(r + z_{nk})}{z_{nk}!\,\Gamma(r)}\, p^{r} (1-p)^{z_{nk}},   (5)
which has a mean equal to r(1-p)/p. Using Equation (4), the expectation of the sum of the z_{nk}s is \alpha N and is independent of the number of features. As K increases, Z becomes sparser, and \alpha controls the sparsity of this matrix.
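To make the sparsity statement concrete, here is a minimal NumPy sketch (our own illustration, not from the paper) that samples Z from the finite model and checks that the expected total count stays near alpha*N regardless of K:

import numpy as np

rng = np.random.default_rng(0)

def sample_finite_gamma_poisson(N, K, alpha, rng):
    # lambda_k ~ Gamma(alpha/K, 1), then z_nk ~ Poisson(lambda_k)
    lam = rng.gamma(alpha / K, 1.0, size=K)
    return rng.poisson(lam, size=(N, K))

# E[sum of all elements of Z] should stay near alpha * N = 20 for every K
N, alpha = 10, 2.0
for K in (5, 50, 500):
    total = np.mean([sample_finite_gamma_poisson(N, K, alpha, rng).sum()
                     for _ in range(2000)])
    print(K, total)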
There is an alternative way of deriving the joint distribution P(Z \mid \alpha) according to the following generative process:
(w_1, \ldots, w_K) \sim D(\alpha/K),   \lambda \sim G(\lambda; \alpha, 1),
L_n \sim Poisson(\lambda),   (z_{n1}, \ldots, z_{nK}) \sim \binom{L_n}{z_{n1} \cdots z_{nK}} \prod_{k=1}^{K} w_k^{z_{nk}},   n = 1, \ldots, N,
where D(\alpha/K) denotes the symmetric Dirichlet. Marginalizing out w and \lambda gives rise to the same distribution P(Z \mid \alpha). The above process generates a gamma random variable and multinomial parameters and then samples the rows of Z independently by using the Poisson-multinomial pair. The
connection with the Dirichlet-multinomial pair implies that the infinite limit of the gamma-Poisson
model must be related to the Dirichlet process. In the next section we see how this connection is
revealed through the Ewens's distribution [5].
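The equivalent process can be simulated directly; the sketch below is our own illustration (variable names are assumptions) that draws the multinomial parameters, the gamma variable, and then each row of Z:

import numpy as np

rng = np.random.default_rng(1)

def sample_rows(N, K, alpha, rng):
    # (w_1,...,w_K) ~ symmetric Dirichlet(alpha/K), lambda ~ Gamma(alpha, 1)
    w = rng.dirichlet(np.full(K, alpha / K))
    lam = rng.gamma(alpha, 1.0)
    # L_n ~ Poisson(lambda), then row n ~ Multinomial(L_n, w)
    L = rng.poisson(lam, size=N)
    return np.stack([rng.multinomial(Ln, w) for Ln in L])

Z = sample_rows(N=10, K=50, alpha=2.0, rng=rng)
print(Z.shape, Z.sum())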
Models that combine gamma and Poisson distributions are widely applied in statistics. We point out
that the above finite model shares similarities with the techniques presented in [3, 4] that model text
data.
3 The infinite limit and the stochastic process
To express the probability distribution in (3) for infinitely many features K we need to consider equivalence classes of Z matrices, similarly to [7]. The association of columns in Z with features defines an arbitrary labelling of the features. Given that the likelihood p(X|Z) is not affected by relabelling the features, there is an equivalence class of matrices that can all be reduced to the same standard form after column reordering. We define the left-ordered form of non-negative integer valued matrices as follows. We assume that any possible z_{nk} satisfies z_{nk} \le c - 1, where c is a sufficiently large integer. We define h = (z_{1k} \ldots z_{Nk}) as the integer number associated with column k, expressed in a numeral system with basis c. The left-ordered form is defined so that the columns of Z appear from left to right in decreasing order according to the magnitude of their numbers.
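The left-ordered form is easy to compute; a small sketch of our own (assuming matrices small enough that the base-c column labels fit in an integer):

import numpy as np

def left_ordered_form(Z, c=None):
    Z = np.asarray(Z)
    if c is None:
        c = int(Z.max()) + 1          # any c > max(z_nk) induces the same order
    # label h of column k: digits z_1k ... z_Nk read in base c
    digits = c ** np.arange(Z.shape[0] - 1, -1, -1)
    h = digits @ Z
    return Z[:, np.argsort(-h, kind="stable")]

print(left_ordered_form([[0, 2, 1],
                         [1, 0, 1]]))   # columns reordered by decreasing h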
Starting from Equation (3) we wish to define the probability distribution over matrices constrained to a left-ordered standard form. Let K_h be the multiplicity of the column with number h; for example K_0 is the number of zero columns. An equivalence class [Z] consists of K! / \prod_{h=0}^{c^N - 1} K_h! different matrices that are generated from the distribution in (3) with equal probabilities and can be reduced to the same left-ordered form. Thus, the probability of [Z] is
P([Z]) = \frac{K!}{\prod_{h=0}^{c^N - 1} K_h!} \prod_{k=1}^{K} \frac{\Gamma(m_k + \alpha/K)}{\Gamma(\alpha/K)\,(N+1)^{m_k + \alpha/K}\,\prod_{n=1}^{N} z_{nk}!}.   (6)
We assume that the first K_+ features are represented, i.e. m_k > 0 for k \le K_+, while the rest K - K_+ features are unrepresented, i.e. m_k = 0 for k > K_+. The infinite limit of (6) is derived by following a strategy similar to the one used for expressing the distribution over partitions of objects as a limit of the Dirichlet-multinomial pair [6, 9]. The limit takes the following form:
P(Z \mid \alpha) = \frac{\alpha^{K_+}}{\prod_{h=1}^{c^N - 1} K_h!} \cdot \frac{\prod_{k=1}^{K_+} (m_k - 1)!}{(N+1)^{m + \alpha}\,\prod_{k=1}^{K_+} \prod_{n=1}^{N} z_{nk}!},   (7)
where m = \sum_{k=1}^{K_+} m_k. This expression defines an exchangeable joint distribution over non-negative integer valued matrices with infinitely many columns in a left-ordered form. Next we present a sequential stochastic process that reproduces this distribution.
3.1 The stochastic process
The distribution in Equation (7) can be derived from a simple stochastic process that constructs
the matrix Z sequentially, with the data arriving one at a time in a fixed order. The steps of this
stochastic process are discussed below.
When the first data point arrives all the features are currently unrepresented. We sample feature
occurrences from the set of unrepresented features as follows. Firstly, we draw an integer number g_1 from the negative binomial NB(g_1; \alpha, 1/2), which has a mean value equal to \alpha. g_1 is the total number of feature occurrences for the first data point. Given g_1, we randomly select a partition (z_{11}, \ldots, z_{1K_1}) of the integer g_1 into parts^1, i.e. z_{11} + \ldots + z_{1K_1} = g_1 and 1 \le K_1 \le g_1, by drawing from Ewens's distribution [5] over integer partitions, which is given by
P(z_{11}, \ldots, z_{1K_1}) = \alpha^{K_1} \frac{\Gamma(\alpha)}{\Gamma(g_1 + \alpha)} \cdot \frac{g_1!}{z_{11} \cdot \ldots \cdot z_{1K_1}} \prod_{i=1}^{g_1} \frac{1}{v_i^{(1)}!},   (8)
where v_i^{(1)} is the multiplicity of integer i in the partition (z_{11}, \ldots, z_{1K_1}). The Ewens's distribution is equivalent to the distribution over partitions of objects induced by the Dirichlet process and the Chinese restaurant process, since we can derive the one from the other using simple combinatorial arguments. The difference between them is that the former is a distribution over integer partitions while the latter is a distribution over partitions of objects.
^1 The partition of a positive integer is a way of writing this integer as a sum of positive integers where order does not matter, e.g. the partitions of 3 are: (3), (2,1) and (1,1,1).
Let K_{n-1} be the number of represented features when the nth data point arrives. For each feature k, with k \le K_{n-1}, we choose z_{nk} based on the popularity of this feature in the previous n - 1 data points. This popularity is expressed by the total number of occurrences of the feature k, which is given by m_k = \sum_{i=1}^{n-1} z_{ik}. Particularly, we draw z_{nk} from NB(z_{nk}; m_k, n/(n+1)), which has a mean value equal to m_k/n. Once we have sampled from all represented features we need to consider a sample from the set of unrepresented features. Similarly to the first data point, we first draw an integer g_n from NB(g_n; \alpha, n/(n+1)), and subsequently we select a partition of that integer by drawing from the Ewens's formula. This process produces the following distribution:
P(Z \mid \alpha) = \frac{\alpha^{K_+} \prod_{k=1}^{K_+} (m_k - 1)!}{\prod_{i=1}^{g_1} v_i^{(1)}! \cdot \ldots \cdot \prod_{i=1}^{g_N} v_i^{(N)}! \; (N+1)^{m + \alpha}\, \prod_{k=1}^{K_+} \prod_{n=1}^{N} z_{nk}!},   (9)
where \{v_i^{(n)}\} are the integer multiplicities for the nth data point which arise when we draw from the Ewens's distribution. Note that the above expression does not have exactly the same form as the distribution in Equation (7) and is not exchangeable, since it depends on the order in which the data arrive. However, if we consider only the left-ordered class of matrices generated by the stochastic process then we obtain the exchangeable distribution in Equation (7). Note that a similar situation arises with the Indian buffet process.
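For illustration, the sequential process can be coded directly; in this sketch (ours; function names are assumptions) the Ewens draw is realized through the equivalent Chinese restaurant construction, reading off the table sizes as the parts of the partition:

import numpy as np

rng = np.random.default_rng(2)

def ewens_partition(g, alpha, rng):
    # seat g customers in a CRP with parameter alpha; table sizes follow Ewens
    tables = []
    for _ in range(g):
        p = np.array(tables + [alpha], dtype=float)
        j = rng.choice(len(p), p=p / p.sum())
        if j == len(tables):
            tables.append(1)
        else:
            tables[j] += 1
    return tables

def sample_Z_sequential(N, alpha, rng):
    counts, rows = [], []                         # counts[k] = m_k so far
    for n in range(1, N + 1):
        row = [rng.negative_binomial(m, n / (n + 1.0)) for m in counts]
        g = rng.negative_binomial(alpha, n / (n + 1.0))
        new = ewens_partition(g, alpha, rng)      # occurrences of new features
        counts = [m + z for m, z in zip(counts, row)] + new
        rows.append(row + new)
    Z = np.zeros((N, len(counts)), dtype=int)
    for n, row in enumerate(rows):
        Z[n, :len(row)] = row
    return Z

print(sample_Z_sequential(N=5, alpha=2.0, rng=rng))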
3.2 Conditional distributions
When we combine the prior P(Z \mid \alpha) with a likelihood model p(X|Z) and we wish to do inference over Z using Gibbs-type sampling, we need to express conditionals of the form P(z_{nk} \mid Z_{-(nk)}, \alpha), where Z_{-(nk)} = Z \setminus z_{nk}. We can derive such conditionals by taking limits of the conditionals for the finite model or by using the stochastic process.
Suppose that for the current value of Z there exist K_+ represented features, i.e. m_k > 0 for k \le K_+. Let m_{-n,k} = \sum_{\tilde{n} \ne n} z_{\tilde{n}k}. When m_{-n,k} > 0, the conditional of z_{nk} is given by NB(z_{nk}; m_{-n,k}, N/(N+1)). In all other cases, we need a special conditional that samples from new features^2 and accounts for all k such that m_{-n,k} = 0. This conditional draws an integer number from NB(g_n; \alpha, N/(N+1)) and then determines the occurrences for the new features by choosing a partition of the integer g_n using the Ewens's distribution. Finally, the conditional p(\alpha | Z), which can be directly expressed from Equation (7) and the prior of \alpha, is given by
p(\alpha | Z) \propto G(\alpha; \alpha_0, \beta_0) \frac{\alpha^{K_+}}{(N+1)^{\alpha}}.   (10)
Typically the likelihood model does not depend on \alpha, and thus the above quantity is also the posterior conditional of \alpha given data and Z.
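A sketch of these conditional draws (our reading of this section; in the full sampler the prior term below is combined with the likelihood, as in section 4.1):

import numpy as np

rng = np.random.default_rng(3)

def znk_prior_conditional(Z, n, k, rng):
    # NB(z_nk; m_{-n,k}, N/(N+1)) for a feature represented elsewhere
    N = Z.shape[0]
    m_minus = Z[:, k].sum() - Z[n, k]
    assert m_minus > 0, "use the new-feature conditional when m_{-n,k} = 0"
    return rng.negative_binomial(m_minus, N / (N + 1.0))

def new_feature_occurrences(alpha, N, rng):
    # g_n ~ NB(alpha, N/(N+1)), split into parts by Ewens's distribution,
    # realized here via the equivalent Chinese restaurant construction
    g = rng.negative_binomial(alpha, N / (N + 1.0))
    tables = []
    for _ in range(g):
        p = np.array(tables + [alpha], dtype=float)
        j = rng.choice(len(p), p=p / p.sum())
        if j == len(tables):
            tables.append(1)
        else:
            tables[j] += 1
    return tables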
4 A likelihood model for images
An image can contain multiple objects of different classes. Each object class can have more than one occurrence, i.e. multiple instances of the class may appear simultaneously in the image. Unsupervised learning should deal with the unknown number of object classes in the images and also
the unknown number of occurrences of each class in each image separately. If object classes are the
latent features, what we wish to infer is the underlying feature occurrence matrix Z. We consider
an observation model that is a combination of latent Dirichlet allocation [2] and Gaussian mixture
models. Such a combination has been used before [12]. Each image n is represented by dn local
patches that are detected in the image, so that X_n = (Y_n, W_n) = \{(y_{ni}, w_{ni}), i = 1, \ldots, d_n\}. y_{ni} is the two-dimensional location of patch i and w_{ni} is an indicator vector (i.e. it is binary and satisfies \sum_{\ell=1}^{L} w_{ni}^{\ell} = 1) that points into a set of L possible visual appearances. X, Y, and W denote all the data, the locations, and the appearances, respectively. We will describe the probabilistic model starting from the joint distribution of all variables, which is given by
p(\alpha) P(Z|\alpha) p(\{\theta_k\}|Z) \times \prod_{n=1}^{N} \Big[ p(\pi_n|Z_n)\, p(m_n, \Sigma_n|Z_n) \prod_{i=1}^{d_n} P(s_{ni}|\pi_n)\, P(w_{ni}|s_{ni}, \{\theta_k\})\, p(y_{ni}|s_{ni}, m_n, \Sigma_n) \Big].   (11)
^2 Features of this kind are the unrepresented features (k > K_+) as well as all the unique features that occur only in the data point n (i.e. m_{-n,k} = 0, but z_{nk} > 0).
[Figure 1: graphical model with nodes \alpha, Z, \{\theta_k\}, (m_n, \Sigma_n), \pi_n, s_{ni}, w_{ni}, y_{ni}, and plates over d_n and N.]
Figure 1: Graphical model for the joint distribution in Equation (11).
The graphical representation of this distribution is depicted in Figure 1. We now explain all the
pieces of this joint distribution following the causal structure of the graphical model. Firstly, we
generate \alpha from its prior and then we draw the feature occurrence matrix Z using the infinite gamma-Poisson prior P(Z|\alpha). The matrix Z defines the structure for the remaining part of the
model. The parameter vector \theta_k = \{\theta_{k1}, \ldots, \theta_{kL}\} describes the appearance of the local patches W for the feature (object class) k. Each \theta_k is generated from a symmetric Dirichlet, so that the whole set of \{\theta_k\} vectors is drawn from p(\{\theta_k\}|Z) = \prod_{k=1}^{K_+} D(\theta_k|\gamma), where \gamma is the hyperparameter of the symmetric Dirichlet and is common for all features. Note that the feature appearance parameters \{\theta_k\} depend on Z only through the number of represented features K_+, which is obtained by counting the non-zero columns of Z.
The parameter vector \pi_n = \{\pi_{nkj}\} defines the image-specific mixing proportions for the mixture model associated with image n. To see how this mixture model arises, notice that a local patch in image n belongs to a certain occurrence of a feature. We use the double index kj to denote the jth occurrence of feature k, where j = 1, \ldots, z_{nk} and k \in \{\tilde{k} : z_{n\tilde{k}} > 0\}. This mixture model has M_n = \sum_{k=1}^{K_+} z_{nk} components, i.e. as many as the total number of feature occurrences in image n. The assignment variable s_{ni} = \{s_{ni}^{kj}\}, which takes M_n values, indicates the feature occurrence of patch i. \pi_n is drawn from a symmetric Dirichlet given by p(\pi_n|Z_n) = D(\pi_n|\beta/M_n), where Z_n denotes the nth row of Z and \beta is a hyperparameter shared by all images. Notice that \pi_n depends only on the nth row of Z.
The parameters (m_n, \Sigma_n) determine the image-specific distribution for the locations \{y_{ni}\} of the local patches in image n. We assume that each occurrence of a feature forms a Gaussian cluster of patch locations. Thus y_{ni} follows an image-specific Gaussian mixture with M_n components. We assume that the component kj has mean m_{nkj} and covariance \Sigma_{nkj}; m_{nkj} describes object location and \Sigma_{nkj} object shape. m_n and \Sigma_n collect all the means and covariances of the clusters in the image n. Given that any object can be anywhere in the image and have arbitrary scale and orientation, (m_{nkj}, \Sigma_{nkj}) should be drawn from a quite vague prior. We use a conjugate normal-Wishart prior for the pair (m_{nkj}, \Sigma_{nkj}), so that
p(m_n, \Sigma_n | Z_n) = \prod_{k: z_{nk} > 0} \prod_{j=1}^{z_{nk}} N(m_{nkj} | \mu, \tau \Sigma_{nkj})\, W(\Sigma_{nkj}^{-1} | v, V),   (12)
where (\mu, \tau, v, V) are the hyperparameters shared by all features and images. The assignment s_{ni}, which determines the allocation of a local patch to a certain feature occurrence, follows a multinomial: P(s_{ni}|\pi_n) = \prod_{k: z_{nk} > 0} \prod_{j=1}^{z_{nk}} (\pi_{nkj})^{s_{ni}^{kj}}. Similarly, the observed data pair (w_{ni}, y_{ni}) of a local image patch is generated according to
P(w_{ni} | s_{ni}, \{\theta_k\}) = \prod_{k=1}^{K_+} \prod_{\ell=1}^{L} \theta_{k\ell}^{\, w_{ni}^{\ell} \sum_{j=1}^{z_{nk}} s_{ni}^{kj}}   and   p(y_{ni} | s_{ni}, m_n, \Sigma_n) = \prod_{k: z_{nk} > 0} \prod_{j=1}^{z_{nk}} \big[ N(y_{ni} | m_{nkj}, \Sigma_{nkj}) \big]^{s_{ni}^{kj}}.
The hyperparameters (\gamma, \beta, \mu, \tau, v, V) take fixed values that give vague priors, and they are not depicted in the graphical model shown in Figure 1.
Since we have chosen conjugate priors, we can analytically marginalize out from the joint distribution all the parameters \{\pi_n\}, \{\theta_k\}, \{m_n\} and \{\Sigma_n\} and obtain p(X, S, Z, \alpha). Marginalizing out the assignments S is generally intractable, and the MCMC algorithm discussed next produces samples from the posterior P(S, Z, \alpha | X).
4.1 MCMC inference
Inference with our model involves expressing the posterior P(S, Z, \alpha | X) over the feature occurrences Z, the assignments S and the parameter \alpha. Note that the joint P(S, Z, \alpha, X) factorizes according to p(\alpha) P(Z|\alpha) P(W|S, Z) \prod_{n=1}^{N} P(S_n|Z_n) p(Y_n|S_n, Z_n), where S_n denotes the assignments associated with image n. Our algorithm mainly uses Gibbs-type sampling from conditional posterior distributions. Due to space limitations we briefly discuss the main points of this algorithm.
The MCMC algorithm processes the rows of Z iteratively and updates its values. A single step can change an element of Z by one, so that |z_{nk}^{new} - z_{nk}^{old}| \le 1. Initially Z is such that M_n = \sum_{k=1}^{K_+} z_{nk} \ge 1 for any n, which means that at least one mixture component explains the data of each image. The proposal distribution for changing the z_{nk}s ensures that this constraint is satisfied.
Suppose we wish to sample a new value for z_{nk} using the joint model p(S, Z, \alpha, X). Simply writing P(z_{nk} | S, Z_{-(nk)}, \alpha, X) is not useful, since when z_{nk} changes, the number of states the assignments S_n can take also changes. This is clear since z_{nk} is a structural variable that affects the number of components M_n = \sum_{k=1}^{K_+} z_{nk} of the mixture model associated with image n and assignments S_n. On the other hand, the dimensionality of the assignments S_{-n} = S \setminus S_n of all other images is not affected when z_{nk} changes. To deal with the above, we marginalize out S_n and sample z_{nk} from the marginalized posterior conditional P(z_{nk} | S_{-n}, Z_{-(nk)}, \alpha, X), which is computed according to
P(z_{nk} | S_{-n}, Z_{-(nk)}, \alpha, X) \propto P(z_{nk} | Z_{-(nk)}, \alpha) \sum_{S_n} P(W|S, Z)\, p(Y_n|S_n, Z_n)\, P(S_n|Z_n),   (13)
where P(z_{nk} | Z_{-(nk)}, \alpha) for the infinite case is computed as described in section 3.2, while computing the sum requires an approximation. This sum is a marginal likelihood, and we apply importance sampling using as an importance distribution the posterior conditional P(S_n | S_{-n}, Z, W, Y_n) [10]. Sampling from P(S_n | S_{-n}, Z, W, Y_n) is carried out by applying local Gibbs sampling moves and global Metropolis moves that allow two occurrences of different features to exchange their data clusters. In our implementation we consider a single sample drawn from this posterior distribution, so that the sum is approximated by P(W | S_n^*, S_{-n}, Z) p(Y_n | S_n^*, Z_n), where S_n^* is a sample accepted after a burn-in period. In addition to scans that update Z and S, we add a few Metropolis-Hastings steps that update the hyperparameter \alpha using the posterior conditional given by Equation (10).
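This Metropolis-Hastings update only needs the unnormalized density in Equation (10); a minimal sketch of our own (the Gamma prior parameters and step size are assumed values, not the authors'):

import numpy as np

rng = np.random.default_rng(4)

def mh_update_alpha(alpha, K_plus, N, a0=1.0, b0=1.0, step=0.5, rng=rng):
    # log p(alpha | Z) up to a constant, from Equation (10)
    def logp(a):
        return (a0 - 1) * np.log(a) - b0 * a \
               + K_plus * np.log(a) - a * np.log(N + 1.0)
    prop = alpha + step * rng.standard_normal()
    if prop <= 0:                       # outside the support: reject
        return alpha
    return prop if np.log(rng.uniform()) < logp(prop) - logp(alpha) else alpha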
5 Experiments
In the first experiment we use a set of 10 artificial images. We consider four features that have
the regular shapes shown in Figure 2. The discrete patch appearances correspond to pixels and
can take 20 possible grayscale values. Each feature has its own multinomial distribution over the
appearances. To generate an image we first decide to include each feature with probability 0.5.
Then for each included feature we randomly select the number of occurrences from the range [1, 3].
For each feature occurrence we select the pixels using the appearance multinomial and place the
respective feature shape in a random location so that feature occurrences do not occlude each other.
The first row of Figure 2 shows a training image (left), the locations of pixels (middle) and the
discrete appearances (right). The MCMC algorithm was initialized with K_+ = 1, \alpha = 1 and z_{n1} = 1, n = 1, \ldots, 10. The third row of Figure 2 shows how K_+ (left) and the sum of all z_{nk}s
(right) evolve through the first 500 MCMC iterations. The algorithm in the first 20 iterations has
[Figure 2 panels: a training image n, the patch locations Y_n, the appearances W_n; the example rows of Z shown are 1331, 3230, 0212.]
Figure 2: The first row shows a training image (left), the locations of pixels (middle) and the discrete
appearances (right). The second row shows the localizations of all feature occurrences in three
images. Below each image the corresponding row of Z is also shown. The third row shows how K_+ (left) and the sum of all z_{nk}s (right) evolve through the first 500 MCMC iterations.
Figure 3: The leftmost plot on the first row shows the locations of detected patches and the bounding
boxes in one of the annotated images. The remaining five plots show examples of detections and
localizations of the three most dominant features (including the car-category) in five non-annotated
images.
visited the matrix Z that was used to generate the data and then stabilizes. For 86% of the samples
K+ is equal to four. For the state (Z, S) that is most frequently visited, the second row of Figure
2 shows the localizations of all different feature occurrences in three images. Each ellipse is drawn
using the posterior mean values for a pair (m_{nkj}, \Sigma_{nkj}) and illustrates the predicted location and
shape of a feature occurrence. Note that ellipses with the same color correspond to the different
occurrences of the same feature.
In the second experiment we consider 25 real images from the UIUC^3 cars database. We used the
patch detection method presented in [8] and we constructed a dictionary of 200 visual appearances
by clustering the SIFT [8] descriptors of the patches using K-means. Locations of detected patches
are shown in the first row (left) of Figure 3. We partially labelled some of the images. Particularly,
for 7 out of 25 images we annotated the car views using bounding boxes (Figure 3). This allows
us to specify seven elements of the first column of the matrix Z (the first feature will correspond
to the car-category). These z_{nk} values, plus the assignments of all patches inside the boxes, do not
change during sampling. Also the patches that lie outside the boxes in all annotated images are not
allowed to be part of car occurrences. This is achieved by applying partial Gibbs sampling updates
and Metropolis moves when sampling the assignments S. The algorithm is initialized with K+ = 1,
stabilizes after 30 iterations, and then fluctuates between nine and twelve features. To keep the plots
uncluttered, Figure 3 shows the detections and localizations of only the three most dominant features
(including the car-category) in five non-annotated images. The red ellipses correspond to different
occurrences of the car-feature, the green ones to a tree-feature and the blue ones to a street-feature.
6 Discussion
We presented the infinite gamma-Poisson model which is a nonparametric prior for non-negative
integer valued matrices with an infinite number of columns. We discussed the use of this prior for
unsupervised learning where multiple features are associated with our data and each feature can
have multiple occurrences within each data point. The infinite gamma-Poisson prior can be used for
other purposes as well. For example, an interesting application can be Bayesian matrix factorization
where a matrix of observations is decomposed into a product of two or more matrices with one of
them being a non-negative integer valued matrix.
References
[1] C. Antoniak. Mixture of Dirichlet processes with application to Bayesian nonparametric problems. The Annals of Statistics, 2:1152-1174, 1974.
[2] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 3, 2003.
[3] W. Buntine and A. Jakulin. Applying discrete PCA in data analysis. In UAI, 2004.
[4] J. Canny. GaP: A factor model for discrete data. In SIGIR, pages 122-129. ACM Press, 2004.
[5] W. Ewens. The sampling theory of selectively neutral alleles. Theoretical Population Biology, 3:87-112, 1972.
[6] P. Green and S. Richardson. Modelling heterogeneity with and without the Dirichlet process. Scandinavian Journal of Statistics, 28:355-377, 2001.
[7] T. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In NIPS 18, 2006.
[8] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.
[9] R. M. Neal. Bayesian mixture modeling. In 11th International Workshop on Maximum Entropy and Bayesian Methods of Statistical Analysis, pages 197-211, 1992.
[10] M. A. Newton and A. E. Raftery. Approximate Bayesian inference by the weighted likelihood bootstrap. Journal of the Royal Statistical Society, Series B, 3:3-48, 1994.
[11] E. Saund. A multiple cause mixture model for unsupervised learning. Neural Computation, 7:51-71, 1995.
[12] E. Sudderth, A. Torralba, W. T. Freeman, and A. Willsky. Describing Visual Scenes using Transformed Dirichlet Processes. In NIPS 18, 2006.
^3 Available from http://l2r.cs.uiuc.edu/~cogcomp/Data/Car/.
| 3309 |@word middle:2 briefly:1 proportion:1 covariance:2 series:1 yni:9 must:1 partition:13 shape:4 plot:3 update:4 zik:1 occlude:1 generative:1 blei:1 location:14 firstly:2 five:3 unbounded:1 relabelling:1 dn:4 ewens:8 constructed:1 consists:1 combine:2 inside:1 introduce:1 frequently:1 uiuc:1 freeman:1 decreasing:1 decomposed:1 considering:1 becomes:1 underlying:1 what:1 kind:1 kznk:1 z1k:1 sni:10 exactly:1 uk:2 exchangeable:4 control:1 appear:2 yn:7 positive:4 before:1 local:10 limit:7 jakulin:1 burn:1 plus:1 equivalence:5 specifying:1 collect:2 factorization:1 range:1 unique:1 bootstrap:1 regular:1 griffith:1 marginalize:2 applying:3 writing:1 equivalent:2 starting:2 independently:1 sigir:1 deriving:1 population:2 annals:1 play:2 suppose:2 us:2 element:4 recognition:1 particularly:2 approximated:1 skj:3 database:1 observed:2 role:2 ensures:1 depend:2 solving:1 titsias:1 localization:4 distinctive:1 basis:1 vague:3 easily:1 joint:9 k0:1 represented:6 describe:1 monte:2 detected:3 artificial:1 choosing:1 outside:1 quite:2 fluctuates:1 widely:2 valued:8 drawing:2 favor:2 statistic:3 g1:10 richardson:1 itself:1 product:1 canny:1 mixing:1 kh:4 manchester:1 double:1 cluster:3 produce:2 object:14 derive:3 ac:1 school:1 c:2 involves:1 implies:1 predicted:1 annotated:5 attribute:1 stochastic:9 subsequently:1 allele:1 human:1 oisson:1 numeral:1 explains:2 exchange:1 generalization:1 pl:1 hold:1 sufficiently:1 normal:1 exp:3 stabilizes:2 dictionary:1 torralba:1 purpose:1 currently:1 visited:2 mtitsias:1 weighted:1 gaussian:3 pn:3 factorizes:1 derived:3 modelling:1 likelihood:9 indicates:1 mainly:1 sense:1 inference:6 nn:2 typically:1 initially:1 zn1:3 transformed:1 pixel:4 orientation:1 constrained:1 special:1 marginal:1 equal:5 once:2 construct:1 having:1 ng:1 sampling:9 biology:1 unsupervised:5 alter:1 few:1 modern:1 randomly:2 gamma:12 simultaneously:1 detection:3 mixture:11 arrives:2 chain:2 cogcomp:1 partial:1 respective:1 tree:1 old:1 initialized:2 causal:1 theoretical:1 mk:14 instance:1 column:16 modeling:1 gn:4 zn:12 assignment:10 introducing:1 neutral:1 kn:2 twelve:1 international:2 probabilistic:1 together:2 satisfied:1 choose:1 possibly:2 wishart:1 ek:1 inefficient:1 account:1 matter:1 combinatorics:1 vi:4 depends:2 piece:1 view:2 lowe:1 saund:1 red:1 ni:4 qk:3 descriptor:1 correspond:4 bayesian:9 carlo:2 explain:1 naturally:1 associated:8 sampled:1 color:1 car:9 dimensionality:1 specify:2 box:4 anywhere:1 hand:1 hastings:1 defines:4 contain:2 former:1 analytically:1 symmetric:4 iteratively:1 neal:1 deal:4 during:1 pcn:2 image:47 common:1 multinomial:6 association:1 discussed:3 expressing:2 gibbs:4 similarly:4 scandinavian:1 similarity:1 add:1 dominant:2 posterior:9 own:1 belongs:1 scenario:2 certain:2 binary:2 determine:1 period:1 multiple:12 keypoints:1 infer:1 z11:5 uncluttered:1 unlabelled:1 ellipsis:2 qg:1 vision:1 expectation:3 poisson:11 iteration:4 achieved:1 proposal:1 conditionals:3 separately:1 sudderth:1 rest:1 induced:2 jordan:1 integer:26 structural:1 counting:1 revealed:1 wn:2 affect:1 restaurant:1 expression:2 pca:1 cause:3 nine:1 useful:3 generally:1 clear:1 features2:1 factorial:4 nonparametric:6 category:3 reduced:2 generate:3 http:1 exist:1 notice:2 popularity:2 blue:1 discrete:5 hyperparameter:5 affected:2 express:3 four:2 drawn:6 changing:1 ce:1 sum:8 unrepresented:5 arrive:2 place:1 decide:1 mial:1 patch:19 draw:6 matri:1 occur:1 constraint:1 scene:1 generates:1 argument:1 according:5 combination:2 conjugate:2 describes:2 metropolis:3 explained:1 invariant:1 
multiplicity:3 ln:2 equation:9 discus:1 describing:1 mechanism:1 available:1 apply:2 occurrence:31 alternative:1 buffet:4 shortly:1 assumes:1 michalis:1 dirichlet:14 denotes:4 binomial:2 graphical:4 remaining:2 marginalized:1 include:1 clustering:1 newton:1 restrictive:1 k1:3 especially:1 chinese:1 ellipse:1 ghahramani:1 society:1 move:3 quantity:1 occurs:3 strategy:1 nkj:10 street:1 seven:1 willsky:1 index:1 negative:10 rise:1 implementation:1 unknown:3 observation:2 markov:2 finite:4 situation:1 heterogeneity:1 precise:1 arbitrary:2 introduced:1 pair:6 kl:1 connection:2 nip:2 usually:1 below:2 sparsity:3 l2r:1 including:2 green:2 royal:1 suitable:1 indicator:1 nth:4 mn:14 carried:2 raftery:1 kj:3 sn:15 text:2 prior:17 literature:1 evolve:2 marginalizing:2 reordering:2 generation:1 limitation:1 allocation:3 interesting:1 integrate:1 znk:59 share:1 row:16 genetics:1 allow:1 explaining:1 taking:3 sparse:1 xn:5 qn:6 collection:1 approximate:1 keep:1 reproduces:3 sequentially:1 global:1 uai:1 grayscale:1 latent:8 additionally:1 pk:4 main:1 whole:1 wni:7 arise:2 hyperparameters:2 bounding:2 allowed:1 x1:1 wish:4 lie:1 jmlr:1 third:2 formula:1 specific:5 sift:1 intractable:1 workshop:1 sequential:1 importance:2 magnitude:1 labelling:1 illustrates:1 nk:7 sparser:1 gap:1 entropy:1 depicted:2 simply:1 appearance:14 antoniak:1 visual:6 expressed:3 ordered:6 partially:1 corresponds:2 determines:2 satisfies:1 acm:1 conditional:10 labelled:1 shared:2 man:1 change:5 included:1 infinite:16 total:3 accepted:1 select:4 selectively:1 e6:1 latter:1 arises:2 scan:1 indian:4 mcmc:6 |
2,547 | 331 | Natural Dolphin Echo Recognition Using an Integrator
Gateway Network
Herbert L. Roitblat
Department of Psychology, University
of Hawaii, Honolulu, HI 96822
Patrick W. B Moore, Paul E.
Nachtigall, & Ralph H. Penner
Naval Ocean Systems Center, Hawaii
Laboratory, Kailua, Hawaii, 96734
Abstract
We have been studying the performance of a bottlenosed dolphin on
a delayed matching-to-sample task to gain insight into the processes and
mechanisms that the animal uses during echolocation. The dolphin
recognizes targets by emitting natural sonar signals and listening to the
echoes that return. This paper describes a novel neural network
architecture, called an integrator gateway network, that we have developed to account for this performance. The integrator gateway
network combines information from multiple echoes to classify targets
with about 90% accuracy. In contrast, a standard backpropagation
network performed with only about 63% accuracy.
1. INTRODUCTION
The study of animals can provide a very important source of information for the design of automated artificial systems such as robots and autonomous vehicles. Animals
have evolved in a real world, solving real problems, such as gathering and interpreting
essential information. We call the process of using animal studies to inform the design of artificial systems biomimetics because the artificial systems are designed as
mimics of biological ones.
2. INVESTIGATIONS OF DOLPHIN ECHOLOCATION PERFORMANCE
Dolphin echolocation clicks emerge from the rounded forehead or melon as a highly
directional sound beam with 3 dB (half power) beamwidths of approximately 10° in
both the vertical and horizontal planes (Au, et al., 1986). Echolocation clicks have
peak energy at frequencies from 40 to 130 kHz with source levels of 220 dB re: 1 µPa
at 1 m (Au, 1980; Moore & Pawloski, 1990). Bottlenosed dolphins have excellent directionally selective hearing (Au & Moore, 1984), spanning over 7 octaves, and can
detect frequencies as high as 150 kHz (Johnson, 1966).
3. BEHAVIORAL METHODS
We have been studying the performance of a bottlenosed dolphin on an echolocation
delayed matching-to-sample (DMTS) task (e.g., Nachtigall, 1980; Nachtigall, et al.,
1985; Roitblat, et al., 1990a; Moore, et al., 1990). In this task a sample stimulus is
presented underwater to a blindfolded dolphin. The dolphin is allowed to echolocate
on this object ad lib. The object is then removed from the water, and after a short
delay, three alternative objects are presented (the comparison stimuli). One of these
objects is identical to (matches) the sample object, and the dolphin is required to indicate the matching stimulus by touching a response wand in front of it. The object
that serves as sample and the location of the correct match vary randomly from trial
to trial.
Recent work has concentrated on performance with three sample and comparison
stimuli: (a) a PVC plastic tube, (b) a water-filled stainless steel sphere, and (c) a solid
aluminum cone (see Roitblat, et al., 1990a). On average the dolphin used 37.2 clicks
to identify the sample, and an average of 4.2 scans to examine the three comparison
stimuli. A scan is a train of clicks to a single stimulus ended either by the initiation of
a scan to another stimulus or by a cessation of clicking.
The dolphin's scanning patterns were modeled using sequential sampling theory (see
also Roitblat, 1984). Simulations based on this model provide a reasonably good approximation of the dolphin's performance (Roitblat, et al., 1990a). The simulation
differed from the dolphin's actual performance, however, in that it was less variable
than the live dolphin. We return to the problem of accounting for this difference in
variability below after considering some models of the details of echo recognition.
4. ARTIFICIAL NEURAL NETWORKS
We have developed a series of neural-network models of dolphin echolocation processing (see also Gorman and Sejnowski, 1988). We (Moore, et al., 1990; Roitblat, et
al., 1989) trained a counterpropagation network (Hecht-Nielsen, 1987, 1988) to classify echoes represented by their spectra into categories corresponding to each of the
stimuli in our current stimulus set. The network correctly classified more than 95% of
these spectra. This classification suggests two things. First, the spectral information
[Figure 1: network diagram with layers labeled OUTPUT, FEATURE, GATEWAY, and the input layer.]
Figure 1. A schematic of the Integrator Gateway Network.
present in the echoes was sufficient to identify the targets on which the dolphin was
echolocating. Second, only a single echo was necessary to classify the target. Although the network could identify the target with only a single echo, the dolphin concurrently performing the same task emitted many clicks in identifying the same targets. Further investigation revealed that the clicks emitted by the dolphin were more
variable than our initial sample suggested (Roitblat, et al., 1990b). This variability
provides one possible explanation for the high performance level, and low variability
of our initial model.
4.1 THE INTEGRATOR GATEWAY NETWORK
Our integrator gateway network incorporates features of the sequential sampling
model described earlier, including the assumptions that the dolphin averages or sums
spectral information from successive echoes and continues to emit clicks and collect
returning echoes until it can classify the target producing those echoes with sufficient
confidence. It mimics the dolphin's strategy of using multiple echoes to identify each
target. Figure 1 shows a schematic of the Integrator Gateway Network.
Network inputs were 30-dimensional spectral vectors containing echo amplitudes in
1.95 kHz wide frequency bins. The echoes were captured and digitized during the dolphin's matching-to-sample performance. In addition to the 30 bins of spectral information, each echo was also marked as to whether the echo was (1.00) or was not
(0.00) at the start of an echo train. Recall that the dolphin directs a series of clicks to
one target at a time, so it seemed plausible to include information marking the start of
a click train. The frequency inputs were then passed to a scalar unit and to the integrator layer. The integrator layer also contained 30 units, connected to the frequency
units in the input layer in a corresponding one-to-one pattern. The connections to the
scalar unit were fixed at 1/n, where n is the number of frequency inputs. The weights
to the integrator layer were fixed at 1.00. The output of the scalar unit, i.e., the sum
of all of its inputs, was passed to each unit in the integrator layer via a fixed weight of
-1.00. The effect of this scalar unit was to subtract the average activity of the input
layer (neglecting the start-of-train marker) from the inputs to the integrator layer.
This subtraction preserved all of the relative activity information present in the inputs,
but kept the inputs within a manageable range.
The elements in the integrator layer computed a cumulative sum of the inputs they
received. The role of this layer was to accumulate and integrate information from
successive echo spectra. The outputs of the integrator layer were passed via fixed
connections with 1.00 weights to corresponding units in the gateway layer. The integrator layer and the gateway layer each contained the same number of units. Each
unit in the gateway layer acted as a reset for the corresponding unit in the integrator
layer, and connected back to its corresponding unit with a weight of -1.00. Each unit
in the gateway layer employed a multiplicative transfer function that multiplied the
input from its corresponding unit in the integrator layer with the value of the start-of-train marker. Because this marker had 1.00 activity at the start of a scan and 0.00 activity otherwise, it functioned as a reset signal, causing the units in the integrator layer
to be reset to 0.00 at the start of every scan; their previous activation level was subtracted from their input.
The output of the integrator layer also led via variable-weight connections to each of
the elements in the feature layer. The same kind of scalar unit that intervened between the input layer and integrator layer was also used between the integrator layer
and feature layer to subtract the average activity of the integrator layer, again to keep
activations within a manageable range. The outputs of the feature layer led via variable-weight connections to the classifier layer. The elements in these two layers contained sigmoid transfer functions and were trained using a standard cumulative backpropagation algorithm with the epoch duration set to the number of training samples
(60).
The training set consisted of six sets of ten successive echoes each, selected from the
ends of haphazardly chosen echo trains. An equal number of cone, tube, and sphere
echoes were used. The training set was a relatively small subset (4%) of the total set
of available echoes (1,335).
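The forward computation described above can be summarized in a few lines; the sketch below is our reading of the architecture (weight matrices, exact layer sizes and the sigmoid transfer choice for the feature layer are placeholders, not the trained network):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def integrator_gateway_forward(echoes, starts, W_feat, W_class):
    # echoes: (T, 30) spectra of one echo train
    # starts: (T,) start-of-scan marker, 1.00 at the start of a scan, else 0.00
    acc = np.zeros(echoes.shape[1])          # integrator layer state
    outputs = []
    for x, s in zip(echoes, starts):
        if s == 1.0:
            acc[:] = 0.0                     # gateway resets the integrator
        acc += x - x.mean()                  # scalar unit subtracts the average
        h = sigmoid(W_feat @ (acc - acc.mean()))   # feature layer
        y = sigmoid(W_class @ h)                   # classifier layer (3 units)
        outputs.append(y)
    return np.array(outputs)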
4.2 INTEGRATOR GATEWAY RESULTS AND DISCUSSION
Figure 2 shows the results of generalization testing of the network in the form of a derived confidence measure.
[Figure 2: three panels (Sphere, Cone, Tube) plotting the derived confidence against successive echoes.]
Figure 2. Results of generalization testing of the network in the form of the confidence of the network in assigning the echo train to the proper category. See text.
The network was given all 30 scans (10 scans of each target for a total of 1,335 sequential echoes), and was required to classify each echo
train. "Confidence" was defined as the ratio of the activation level of the correct classification versus the total output of the three classification units. A confidence ratio of
1.00 indicates that only the correct unit is active. Confidence of 0.00 indicates that the
correct unit is entirely inactive. Intermediate confidences correspond to intermediate
likelihood ratios (Qian & Scjnowski, 1988).
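In code, the confidence measure and the clicks-to-criterion statistic reported in Table 1 might look like the following (our sketch; the 0.96 threshold is the criterion stated below):

import numpy as np

def clicks_to_criterion(outputs, correct, threshold=0.96):
    # outputs: (T, 3) classifier activities over successive echoes
    conf = outputs[:, correct] / outputs.sum(axis=1)
    hits = np.flatnonzero(conf >= threshold)
    return int(hits[0]) + 1 if hits.size else None  # None: scan ended first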
Recall that echo trains varied in length under control of the dolphin. Therefore, it is
not entirely clear how to measure the network performance. According to sequential
sampling theory (see Roitblat, et al., 1990a) a rational decision maker collects echo
evidence only until a sufficiently confident classification is available and then stops.
Table 1 shows the number of clicks in each train that were required to reach a confidence ratio of 0.96 and the classification that the network derived. Some of the scans
ended before the network could achieve this confidence level. Three erroneous classifications were made (90% correct).
Table 1
Number of Clicks to Network Confidence Criterion

Target Scanned   Integrator Gateway                       Backpropagation
Sphere           16S 9S 2C 6S 19S 19S 34S 7S 23C 3C       1S 6S 1S 5S 14S 14S 3S 1S 40T
Cone             20C 4C 7S 6C 14C 6C 6C 4C 6C 11C         1C 30C 1C 2C 2C 30S¹ 32S¹ 57S 22S 22S
Tube             40C 18C 20T 23T 5T 4T 4T 4T 5T 4T        27T 3S 1S 11S¹ 1T 14T 1T 1T 1T 2T 1T

Note: Entries are the number of clicks needed by the network to achieve the 0.96 confidence criterion. C indicates a Cone decision, S indicates a Sphere decision, T indicates a Tube decision. ¹Indicates that the dolphin stopped echolocating before the network reached its confidence criterion. On these scans, the decision is the one with the highest confidence at the end of the scan.
4.3 A SIMPLE BACKPROPAGATION NETWORK
The integrator gateway network reflects the assumption of sequential sampling theory
that the dolphin combines information from successive echoes in deriving its identification. In contrast, a standard backpropagation network does not integrate over successive echoes, but instead attempts to identify each echo independently. A backpropagation network can be used as a model of a system that emits multiple clicks because the echoes vary in quality. Rather than integrating the echoes, it simply waits
for a single adequate echo that allows it to meet its confidence criterion.
We trained a backpropagation network (using the fast-backpropagation algorithm (Samad, 1988) to adjust the weights) on the same data that were submitted to the integrator network, in order to determine whether the additional structure of the integrator network contributed to its performance accuracy. The network contained exactly
the same number of inputs, hidden units, outputs, and adjustable connections as the
integrator network. The networks differed only in absence of the integration apparatus in the backpropagation network.
[Figure 3: three panels (Sphere, Cone, Tube) plotting the confidence of the backpropagation network against successive echoes.]
Figure 3. Confidence of the backpropagation network in assigning the echo train to
the proper category as a function of the number of echoes received.
4.4 BACKPROPAGATION RESULTS
Figure 3 shows the confidence of the backpropagation network in assigning the echo
train to the proper category as a function of the number of echoes received. Compared to the categorization performance of the integrator network, the backpropagation network was much more variable. As Figure 3 shows, the individual echoes were
highly variable, and frequently assigned to an erroneous category.
The performance of the backpropagation network, when judged by the standards of sequential sampling theory, is also shown in Table 1. This table shows the number of clicks necessary to first reach a classification with greater than 0.96 confidence. On average the backpropagation network (11.57 echoes) reached its confidence criterion in about the same number of clicks (t (df = 58) = 0.03, p > .05) as the integrator network (11.67 echoes), but it produced more errors (χ² (df = 1) = 5.96).
These data suggest that the integrator network added significantly to the ability to
classify sequentially produced echoes. By implementing a signal "averaging" mechanism in the neural network the system could take advantage of the redundancy inherent in the use of multiple echoes from the same source and in the stochastic properties of the noise in which those echoes are embedded. In contrast, the backpropaga-
tion network is required to process not only the characteristics of the echoes themselves, but also the characteristics of the noise. This results in many spurious classifications.
The gateway integrator network adds a level of complexity to the standard backpropagation network architecture that contributes substantially to its performance. Its design is inspired by properties of the dolphin's performance (Nachtigall & Moore,
1988) and it represents one step along a development path that seeks to include more
and more of the mechanisms that we can identify from the neurobiology of echolocation and from the performance of dolphins in their aquatic environment.
References
Au, W. W. L. (1980). Echolocation signals of the Atlantic bottlenose dolphin
(Tursiops truncatus) in open waters. In R. G. Busnel & J. F. Fish (Eds.) Animal
sonar systems. (pp. 251-282). New York: Plenum Press.
Au, W. W. L. & Moore, P. W. B. (1984). Receiving beam patterns and directivity indices of the Atlantic bottlenose dolphin Tursiops truncatus. Journal of the Acoustical
Society of America, 75, 255-262.
Au, W. W. L., Moore, P. W. B. & Pawloski, D. (1986). Echolocating transmitting beam of the Atlantic bottlenose dolphin. Journal of the Acoustical Society of America,
80, 688-691.
Gorman, R. P. & Sejnowski, T. J. (1988). Analysis of hidden units in a layered network trained to classify sonar targets. Neural Networks, 1, 75-89.
Hecht-Nielsen, R. (1987). Counterpropagation networks. Applied Optics, 26, 4979-4984.
Hecht-Nielsen, R. (1988). Applications of counterpropagation networks. Neural Networks, 1, 131-139.
Johnson, C. S. (1966). Auditory thresholds of the bottlenosed porpoise, Tursiops truncatus (Montague) (Naval Ordnance Test Station Technical Publication No 4178).
Naval Ordnance Test Station.
Moore, P. W. B. & Pawloski, D. A. (1990). Investigations on the control of echolocation pulses in the dolphin (Tursiops truncatus). In J. Thomas & R. Kastelein (Eds.)
Sensory abilities of cetaceans. New York: Plenum. In press.
Moore, P. W. B., Roitblat, H. L., Penner, R. H., & Nachtigall, P. E., Recognizing Successive Dolphin Echoes with an Integrator Gateway Network. Submitted for publication.
Nachtigall, P. E. (1980). Odontocete echolocation performance on object size, shape, and material. In R. G. Busnel & J. F. Fish (Eds.), Animal Sonar Systems, pp. 71-95,
New York, Plenum Press.
Natural Dolphin Echo Recognition Using an Integrator Gateway Network
Nachtigall, P. E., & Moore, P. W. B. (Eds.) (1988). Animal sonar: Processes and
performance. New York: Plenum.
Nachtigall, P. E., Patterson, S. A., & Bauer, G. B. (1985). Echolocation delayed
matching-to-sample in a bottlenose dolphin. Paper presented at the Sixth Biennial
Conference on the Biology of Marine Mammals, Vancouver, B.C., Canada. November.
Qian, N. & Sejnowski, T. J. (1988). Predicting the secondary structure of globular
proteins using neural network models. Journal of Molecular Biology, 202, 865-884.
Roitblat, H. L. (1984). Representations in pigeon working memory. In: H. L. Roitblat, T. G. Bever and H. S. Terrace (Eds.), Animal cognition. Hillsdale, NJ: Erlbaum,
79-97.
Roitblat, H. L., Moore, P. W. B., Nachtigall, P. E., Penner, R. H., & Au, W. W. L.
(1989). Dolphin echolocation: Identification of returning echoes using a counterpropagation network. Proceedings of the First International Joint Conference on Neural
Networks. Washington, DC: IEEE Press.
Roitblat, H. L., Penner, R. H. & Nachtigall, P. E. (1990a). Matching-to-sample by an
echolocating dolphin. Journal of Experimental Psychology: Animal Behavior Processes, 16, 85-95.
Roitblat, H. L., Penner, R. H. & Nachtigall, P. E. (1990b). Attention and decision
making in echolocation matching-to-sample by a bottlenose dolphin (Tursiops truncatus): the microstructure of decision making. In J. Thomas & R. Kastelein (Eds.)
Sensory abilities of cetaceans. New York: Plenum. In press.
Samad, T. (1988). Back propagation is significantly faster if the expected value of the
source unit is used for update. International Neural Network Society Conference
Abstracts.
| 331 |@word trial:2 manageable:2 open:1 seek:1 simulation:2 pulse:1 accounting:1 mammal:1 solid:1 ne1work:2 initial:2 series:2 atlantic:3 current:1 activation:3 assigning:3 shape:1 designed:1 update:1 half:1 selected:1 plane:1 marine:1 short:1 provides:1 location:1 successive:10 along:1 terrace:1 combine:2 behavioral:1 expected:1 behavior:1 themselves:1 examine:1 frequently:1 integrator:37 inspired:1 actual:1 considering:1 lib:1 evolved:1 kind:1 substantially:1 developed:2 bottlenose:4 ended:2 nj:1 every:1 exactly:1 returning:2 classifier:1 control:2 unit:22 producing:1 before:2 apparatus:1 meet:1 path:1 approximately:1 au:7 suggests:1 collect:2 range:2 testing:2 backpropagation:16 pawloski:3 honolulu:1 significantly:2 matching:7 confidence:18 integrating:1 wait:1 protein:1 suggest:1 get:1 layered:1 judged:1 live:1 center:1 attention:1 duration:1 l:3 independently:1 identifying:1 qian:2 insight:1 deriving:1 autonomous:1 underwater:1 plenum:5 target:11 us:1 pa:1 element:3 recognition:5 continues:1 recog:1 role:1 connected:2 removed:1 highest:1 environment:1 complexity:1 trained:4 solving:1 patterson:1 joint:1 montague:1 represented:1 america:2 train:11 fast:1 sejnowski:3 artificial:4 plausible:1 otherwise:1 ition:1 ability:3 echo:56 directionally:1 advantage:1 reset:3 causing:1 achieve:2 dolphin:41 categorization:1 object:7 received:3 indicate:1 correct:5 stochastic:1 aluminum:1 material:1 implementing:1 bin:2 globular:1 ja:1 hillsdale:1 microstructure:1 generalization:2 investigation:3 tul:1 biological:1 sufficiently:1 ic:1 cognition:1 vary:2 maker:1 tf:1 reflects:1 concurrently:1 rather:1 tar:1 publication:2 derived:2 naval:3 directs:1 indicates:5 likelihood:1 contrast:3 detect:1 hidden:2 spurious:1 selective:1 ralph:1 classification:8 development:1 animal:8 integration:1 equal:1 washington:1 sampling:5 identical:1 represents:1 biology:2 mimic:2 stimulus:9 inherent:1 randomly:1 stainless:1 individual:1 delayed:3 attempt:1 highly:2 adjust:1 emit:1 neglecting:1 necessary:2 perfonnance:1 filled:1 re:1 stopped:1 classify:7 earlier:1 penner:5 hearing:1 subset:1 entry:1 delay:1 recognizing:1 johnson:2 erlbaum:1 front:1 scanning:1 confident:1 peak:1 international:2 ops:1 receiving:1 backpropaga:1 rounded:1 transmitting:1 again:1 tube:7 containing:1 nachtigall:15 hawaii:3 return:2 account:1 ad:1 performed:1 vehicle:1 multiplicative:1 tion:1 reached:2 start:6 ir:1 accuracy:3 characteristic:2 correspond:1 identify:6 directional:1 identification:2 roitblat:18 plastic:1 produced:2 classified:1 submitted:2 inform:1 reach:2 ed:6 sixth:1 energy:1 echolocation:13 frequency:6 pp:2 gain:1 rational:1 stop:1 emits:1 auditory:1 recall:2 nielsen:3 amplitude:1 back:2 response:1 until:2 working:1 horizontal:1 marker:3 propagation:1 cessation:1 quality:1 aquatic:1 effect:1 consisted:1 assigned:1 moore:16 laboratory:1 during:2 criterion:5 octave:1 interpreting:1 pvc:1 novel:1 sigmoid:1 khz:3 echolocating:4 forehead:1 accumulate:1 had:1 robot:1 gateway:21 patrick:1 add:1 recent:1 touching:1 initiation:1 herbert:1 captured:1 additional:1 greater:1 employed:1 subtraction:1 determine:1 signal:4 ii:1 multiple:4 sound:1 technical:1 match:2 faster:1 sphere:7 lin:1 molecular:1 hecht:3 a1:3 schematic:2 df:2 beam:3 preserved:1 addition:1 source:4 db:2 thing:1 incorporates:1 call:1 emitted:2 revealed:1 intermediate:2 iii:1 automated:1 psychology:2 architecture:2 click:15 listening:1 inactive:1 whether:2 six:1 passed:3 york:5 jj:1 adequate:1 clear:1 ten:1 concentrated:1 category:5 sl:1 fish:2 correctly:1 redundancy:1 
threshold:1 kept:1 cone:6 sum:3 wand:1 decision:7 entirely:2 layer:28 hi:1 melon:1 activity:5 scanned:1 optic:1 x2:1 performing:1 relatively:1 acted:1 department:1 marking:1 according:1 describes:1 making:2 gathering:1 mechanism:3 needed:1 serf:1 end:2 studying:2 available:2 multiplied:1 haphazardly:1 spectral:4 ocean:1 subtracted:1 alternative:1 thomas:2 include:2 recognizes:1 society:3 ne1works:1 added:1 strategy:1 acoustical:2 spanning:1 water:3 length:1 modeled:1 index:1 ratio:4 steel:1 design:3 proper:3 adjustable:1 contributed:1 vertical:1 november:1 neurobiology:1 variability:3 digitized:1 dc:1 varied:1 station:2 canada:1 required:4 connection:5 functioned:1 suggested:1 below:1 pattern:3 including:1 memory:1 explanation:1 power:1 natural:6 predicting:1 bever:1 text:1 epoch:1 vancouver:1 relative:1 embedded:1 versus:1 integrate:2 sufficient:2 wide:1 emerge:1 bauer:1 llc:1 world:1 cumulative:2 seemed:1 sensory:2 made:1 emitting:1 feat:1 keep:1 active:1 sequentially:1 spectrum:3 sonar:4 table:4 reasonably:1 transfer:2 contributes:1 excellent:1 noise:2 paul:1 allowed:1 counterpropagation:4 tlll:1 differed:2 lc:1 clicking:1 intervened:1 erroneous:2 evidence:1 essential:1 samad:2 sequential:6 gorman:2 subtract:2 led:2 pigeon:1 simply:1 contained:4 scalar:5 biennial:1 marked:1 absence:1 averaging:1 called:1 total:3 secondary:1 experimental:1 ofi:1 scan:10 |
2,548 | 3,310 | Infinite State Bayesian Networks
Max Welling*, Ian Porteous, Evgeniy Bart†
Donald Bren School of Information and Computer Sciences
University of California Irvine
Irvine, CA 92697-3425 USA
{welling,iporteou}@ics.uci.edu, bart@caltech.edu
Abstract
A general modeling framework is proposed that unifies nonparametric-Bayesian
models, topic-models and Bayesian networks. This class of infinite state Bayes
nets (ISBN) can be viewed as directed networks of "hierarchical Dirichlet processes" (HDPs) where the domain of the variables can be structured (e.g. words
in documents or features in images). We show that collapsed Gibbs sampling can
be done efficiently in these models by leveraging the structure of the Bayes net
and using the forward-filtering-backward-sampling algorithm for junction trees.
Existing models, such as the nested DP, Pachinko allocation, and mixed membership stochastic block models, as well as a number of new models, are described as ISBNs.
Two experiments have been performed to illustrate these ideas.
1 Introduction
Bayesian networks remain the cornerstone of modern AI. They have been applied to a wide range
of problems, both in academia and in industry. A recent development in this area is a class
of Bayes nets known as topic models (e.g. LDA [1]) which are well suited for structured data such
as text or images. A recent statistical sophistication of topic models is a nonparametric extension
known as HDP [2], which adaptively infers the number of topics based on the available data.
This paper has the goal of bridging the gap between these three developments. We propose a general
modeling paradigm, the "infinite state Bayes net" (ISBN), that incorporates these three aspects.
We consider models where the variables may have the nested structure of documents and images,
may have infinite discrete state spaces, and where the random variables are related through the
intuitive causal dependencies of a Bayes net. ISBN?s can be viewed as collections of HDP ?modules?
connected together to form a network. Inference in these networks is achieved through a two-stage
Gibbs sampler, which combines the ?forward-filtering-backward-sampling algorithm? [3] extended
to junction trees and the direct assignment sampler for HDPs [2].
2 Bayes Net Structure for ISBN
Consider observed random variables x^A = \{x^a\}, a = 1..A. These variables can take values in an arbitrary domain. In the following we will assume that x^a is sampled from a (conditional) distribution in the exponential family. We will also introduce hidden (unobserved, latent) variables \{z^b\}, b = 1..B, which will always take discrete values. The indices a, b thus index the nodes of the Bayesian network.
We will introduce a separate index, e.g. n_a, to label observations. In the simplest setting we assume
IID data n = i, i.e. N independent identically distributed observations for each variable. We will
* On sabbatical at Radboud University Nijmegen, Netherlands, Dept. of Biophysics.
† Joint appointment at California Institute of Technology, USA, Dept. of Electrical Engineering.
[Figure 1: three Bayes net diagrams, panels (a), (b), and (c), with CPT nodes such as \theta_{x|z}, \pi_{z|j}, and \tau_z.]
Figure 1: Graphical representation of (a) Unstructured infinite state Bayesian network, (b) HDP, (c) H2DP.
however also be interested in more structured data, such as words in documents, where the index
n can be decomposed into e.g. n = (j, ij ). In this notation we think of j as labelling a document
and ij as labelling a word in document j. To simplify notation we will often write n = (ji). It is
straightforward to generalize to deeper nestings of indices, e.g. n = (k, jk , ijk ) = (kji) where k
can index e.g. books, j chapters and i words. We interpret this as the observed structure in the data,
as opposed to the latent structure which we seek to infer. The unobserved structure is labelled with
the discrete "assignment variables" z^a_n which assign the object indexed by n to latent groups (a.k.a.
topics, factors, components).
The assignment variables z together with the observed variables x are organized into a Bayes net,
where dependencies are encoded by the usual "conditional probability tables" (CPTs), which we denote with \theta^a_{x^a|\pi^a} for observed variables and \beta^b_{z^b|\pi^b} for latent variables^1. Here, \pi^a denotes the joint state of all the parent variables of x^a or z^b. When a vertical bar is present we normalize over the variables to the left of it, e.g. \sum_{x^a} \theta^a_{x^a|\pi^a} = 1, \forall a, \pi^a. Note that CPTs are considered random variables and may themselves be indexed by (a subset of) n, e.g. \theta_{x^a|\pi^a, j}.
We assume that each \beta^b is sampled from a Dirichlet prior, e.g. \beta_{z^b|\pi^b} \sim D[\alpha^b \tau_{z^b}], independently and identically for all values of \pi^b. The distribution \tau itself is Dirichlet distributed, \tau_{z^a} \sim D[\gamma^a / K^a], where K^a is the number of states for variable z^a. We can put gamma priors on \alpha^a, \gamma^a and consider them as random variables as well, but to keep things simple we will consider
them fixed variables here. We refer to [4] for algorithms to learn them from data and to [5] and [2]
for ways to infer them through sampling. In section 5 we further discuss these hierarchical priors.
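For concreteness, a finite-K sketch of this hierarchical Dirichlet prior for a single latent node (our own illustration; the parameter values are arbitrary assumptions):

import numpy as np

rng = np.random.default_rng(5)

def sample_node_cpts(K, n_parent_states, alpha, gamma, rng):
    # tau ~ symmetric Dirichlet(gamma/K); beta[.|l] ~ Dirichlet(alpha * tau)
    tau = rng.dirichlet(np.full(K, gamma / K))
    # tiny jitter guards against numerical zeros in tau
    beta = np.stack([rng.dirichlet(alpha * tau + 1e-12)
                     for _ in range(n_parent_states)])
    return tau, beta

tau, beta = sample_node_cpts(K=20, n_parent_states=6, alpha=5.0, gamma=2.0, rng=rng)
print(beta.shape, beta.sum(axis=1))   # each row of the CPT sums to one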
In drawing BNs we will not include the plates to avoid cluttering the figures. However, it is always
possible to infer the number of times variables in the BN are replicated by looking at its indices. For
instance, the variable node labelled with \pi_{z^1|z^2, j} in Fig. 3a stands for K^{(2)} x J IID copies of \pi^1 sampled from \tau^1.
3 Networks of HDPs
In Fig. 1b we have drawn the finite version of the HDP. Here \theta is a distribution over words, one for each topic value z, and is often referred to as a "topic distribution". Topic values are generated from a document-specific distribution \pi, which in turn is generated from a "mother distribution" over topics \tau. As was shown in [2], one can take the infinite limit K \to \infty in this model and arrive at
the HDP. We will return to this infinite limit when we describe Gibbs sampling. In the following we
will use the same graphical model for finite and infinite versions of ISBNs.
^1 We will often avoid writing the super-indices a, b when it is clear from the context, e.g. \theta^a_{x^a|\pi^a} = \theta_{x^a|\pi^a}.
[Figure 2: three Bayes net diagrams, panels (a), (b), and (c).]
Figure 2: Graphical representation for (a) BiHDP, (b) Mixed membership stochastic block model and (c) the "multimedia" model.
One of the key features of the HDP is that topics are shared across all documents indexed by j. The
reason for this is the distribution \tau: new states are "invented" at this level and become available to all other documents. In other words, there is a single state space for all copies of \pi. One can interpret j as an instantiation of a dummy, fully observed random variable \zeta. We could add this node to the BN as a parent of z (since \pi depends on it) and reinterpret the statement of sharing topics as a fully connected transition matrix between states of \zeta and states of z. This idea can be extended to a combination of fully observed parent variables and multiple unobserved parent variables, e.g. \pi conditioned on z^2, z^3, \zeta. Moreover, the child variables do not have to be observed either, so we can also replace x by z. In this fashion we can connect together multiple vertical stacks \tau \to \pi \to z, where each such module is part of a "virtual-HDP" where the joint child states act as virtual data and the joint
parent states act as virtual document labels. Examples are given in Figs. 1a (infinite extension of a
Bayes net with IID data items) and 3a (infinite extension of Pachinko Allocation).
4 Inference
To simplify the presentation we will now restrict attention to a Bayesian network where all CPTs are
shared across all data-items (see Fig.1a). In this case data is unstructured, assumed IID and indexed
by a flat index n = i. Instead of going through the detailed derivation, which is an extension of the
derivation in [2] for HDP, we will describe the sampling process in the following.
There is a considerable body of empirical evidence which confirms that marginalizing out the variables $\pi, \theta$ will result in improved inference (e.g. [6, 7]). In this collapsed space, we sample two sets
of variables alternatingly, $\{z\}$ on the one hand and $\{\tau\}$ on the other. First, we focus on the latter given
$z$ and notice that all $\tau$ are conditionally independent given $z, x$.
Sampling $\tau|(z, x)$: Given $x, z$ we can compute count matrices² $N_{z^b|\mathrm{pa}^b}$ and $N_{x^a|\mathrm{pa}^a}$ as
$N_{z^b=k|\mathrm{pa}^b=l} = \sum_i \mathbb{I}[z^b_i = k \wedge \mathrm{pa}^b_i = l]$, and similarly for $N_{x^a|\mathrm{pa}^a}$. Given these counts, for each value
of $k, l$ we now create the following vector: $v_{kl} = \alpha\tau_k / (\alpha\tau_k + n_{k|l} - 1)$ with $n_{k|l} = [1, 2, \ldots, N_{k|l}]$.
We then draw $N_{k|l}$ Bernoulli random variables with probabilities of success given by the
elements of $v$, which we call³ $s^t_{k|l}$, and compute their sum across $t$: $S_{k|l} = \sum_t s^t_{k|l}$. This procedure is
equivalent to running a Chinese restaurant process (CRP) with $N_{k|l}$ customers and only
keeping track of how many tables become occupied. We will denote it with $S_{k|l} \sim \mathcal{A}[N_{k|l}, \alpha\tau_k]$
after Antoniak [8]. Next we compute $S_k = \sum_l S_{k|l}$ and sample $\tau$ from a Dirichlet distribution,
$\tau \sim \mathcal{D}[\gamma, S_1, \ldots, S_K]$. Note that $\tau$ is a distribution over $K^a + 1$ states, where we now denote with
$K^a$ the number of occupied states. If the state corresponding to $\gamma$ is picked, a new state is created
and we increment $K^a \leftarrow K^a + 1$. If on the other hand a state becomes empty, we remove it from
the list and decrement $K^a \leftarrow K^a - 1$. This will allow assignment variables to add or remove
states adaptively⁴.

² Note that these can also be used to compute Rao-Blackwellised estimates of $\pi$ and $\theta$, i.e. $\mathrm{E}[\pi_{z^b|\mathrm{pa}^b}] = (\alpha_b \tau_{z^b} + N_{z^b|\mathrm{pa}^b})/(\alpha_b + N_{\mathrm{pa}^b})$, and similarly for $\theta$.
³ These variables are so-called auxiliary variables to facilitate sampling $\tau$.
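The Antoniak draw $S_{k|l} \sim \mathcal{A}[N_{k|l}, \alpha\tau_k]$ is only a few lines with the Bernoulli construction just described; a minimal sketch (our own illustration, with made-up counts):

    import numpy as np

    def antoniak_sample(N, a, rng):
        """Number of occupied tables after seating N customers in a CRP with
        concentration a, via the run of Bernoulli draws described above."""
        if N == 0:
            return 0
        n = np.arange(1, N + 1)
        p = a / (a + n - 1.0)              # the success probabilities v_{kl}
        return int((rng.random(N) < p).sum())

    rng = np.random.default_rng(0)
    gamma, alpha = 1.0, 1.0
    tau = np.array([0.5, 0.3, 0.2])        # current top-level weights
    N_kl = np.array([[12, 0], [3, 7], [5, 1]])   # counts N_{k|l}

    S_kl = np.array([[antoniak_sample(N_kl[k, l], alpha * tau[k], rng)
                      for l in range(N_kl.shape[1])] for k in range(len(tau))])
    S_k = S_kl.sum(axis=1)
    tau_new = rng.dirichlet(np.append(S_k, gamma))  # last entry = new-state mass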
Figure 3: Graphical representation for (a) Pachinko Allocation and (b) Nested DP.
Sampling $z|(\tau, x)$: The conditional probability of all $\{z_i, x_i\}$ variables jointly (for fixed $i$) is
given by,

$$P(x_i, z_i \,|\, z_{-i}, x_{-i}, \tau, \alpha) \;=\; \prod_a F(x^a_i \,|\, x^a_{-i}, \mathrm{pa}^a_i) \;\prod_b \frac{\alpha_b \tau_{z^b_i} + N^{-i}_{z^b_i|\mathrm{pa}^b_i}}{\alpha_b + N^{-i}_{\mathrm{pa}^b_i}} \qquad (1)$$

where $N^{-i}_{z^b_i|\mathrm{pa}^b_i}$ is the number of data-cases assigned to group $z^b_i$ for variable $b$ with its parents assigned
to group $\mathrm{pa}^b_i$, where we exclude data-case $i$ from this count. Also,

$$F(x^a_i \,|\, x_{-i}, \mathrm{pa}^a_i = k) \;=\; \frac{\int d\theta_k \, P(x^a_i|\theta_k) \prod_{i' \neq i:\, \mathrm{pa}^a_{i'}=k} P(x^a_{i'}|\theta_k) \, P(\theta_k)}{\int d\theta_k \prod_{i' \neq i:\, \mathrm{pa}^a_{i'}=k} P(x^a_{i'}|\theta_k) \, P(\theta_k)} \qquad (2)$$
Importantly, equation 1 follows the structure of the original Bayes net, where each term has the
form of a conditional distribution $P(z^a_i|\mathrm{pa}^a_i)$ and is based on sufficient statistics collected over all
the other data-cases. Hence, we can use the structure of the Bayes net to sample the assignment
variables jointly across the BN (for data-case $i$). The general technique that allows one to exploit
network structure is "forward-filtering-backward-sampling" (FFBS) [3]. Assume for instance that
the network is a tree. In that case we first propagate information from the leaves to the root, computing the probabilities $P(z^b|\{x^b_\downarrow\})$ as we go, where "↓" means that we compute a marginal conditioned
on "downstream" evidence. When we reach the root we draw a sample from $P(z_{\mathrm{root}}|\{x^b\})$. Finally,
we work our way back to the leaves, conditioning on drawn samples (which summarize upstream
information) and using the marginal probabilities $P(z^b|\{x^b_\downarrow\})$ cached during the filtering phase to
represent downstream evidence. For networks with higher treewidth we can extend this technique
to junction trees. Alternatively, one can use cut-set sampling techniques [9].
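As a concrete illustration of FFBS in its simplest setting, the sketch below (ours, not the paper's implementation) samples a hidden chain with known CPTs; the tree-structured and collapsed-count versions used in this section follow the same filter-then-sample pattern:

    import numpy as np

    def ffbs_chain(prior, trans, emit, obs, rng):
        """One joint posterior sample of z_{1:T} for a chain z_1 -> ... -> z_T
        with x_t ~ emit[z_t]. prior: (K,), trans: (K, K) with rows
        P(z_t | z_{t-1}), emit: (K, V), obs: length-T list of symbols."""
        T, K = len(obs), len(prior)
        alpha = np.zeros((T, K))
        alpha[0] = prior * emit[:, obs[0]]
        alpha[0] /= alpha[0].sum()
        for t in range(1, T):                    # forward filtering
            alpha[t] = (alpha[t - 1] @ trans) * emit[:, obs[t]]
            alpha[t] /= alpha[t].sum()
        z = np.zeros(T, dtype=int)
        z[-1] = rng.choice(K, p=alpha[-1])       # sample at the "root"
        for t in range(T - 2, -1, -1):           # backward sampling
            w = alpha[t] * trans[:, z[t + 1]]
            z[t] = rng.choice(K, p=w / w.sum())
        return z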
5 ISBN for Structured Data
In section 2 we introduced an index $n$ to label the known structure of the data. The simplest nontrivial
example is given by the HDP, where $n = (ji)$, indexing e.g. documents and words. In this case the
CPT $\pi_{z|j}$ is not shared across all data, but is specific to a document. Next consider Fig.1c where
$n = (kji)$ is labelling for instance words ($i$) in chapters ($j$) in books ($k$). The first level CPT $\pi_{z|kj}$ is
specific to chapters (and hence books) and is sampled from a Dirichlet distribution with mean given
by a second level CPT $\nu_{z|k}$ specific to books, which in turn is sampled from a Dirichlet distribution
with mean $\tau_z$, which finally is sampled from a Dirichlet prior with parameters $\gamma$. Sampling occurs
again in two phases: sampling $\nu, \tau|x, z$ and $z|\nu, \tau, x$ while marginalizing out $\pi, \theta$.

⁴ We adopted the name "infinite state Bayesian network" because the $(K^a + 1)$th state actually represents an infinite pool of indistinguishable states.
To sample from $\nu, \tau$ we compute counts $N_{u|m,jk}$, which is the number of times words in chapter $j$
and book $k$ were assigned to the joint state $z = u$, $\mathrm{pa} = m$. We then work our way up the stack,
sampling new count arrays $S, R$ as we go, and then down again sampling the CPTs $(\nu, \tau)$ using
these count arrays⁵. Note that this is just one step of Gibbs sampling from $P(\nu, \tau|z, x)$ and does
not (unlike the other phase for $z$) generate an equilibrium sample from this conditional distribution.

$$s_{u|jkm} \sim \mathcal{A}[N_{u|jkm}, \alpha\nu_{u|k}] \;\rightarrow\; S_{u|k} = \sum_{j,m} s_{u|jkm} \;\rightarrow\; r_{u|k} \sim \mathcal{A}[S_{u|k}, \beta\tau_u] \;\rightarrow\; R_u = \sum_k r_{u|k}$$
$$\tau \sim \mathcal{D}[(\gamma, R)] \;\rightarrow\; \nu_{u|k} \sim \mathcal{D}[\beta\tau_u + S_{u|k}] \qquad (3)$$

⁵ Teh's code npbayes-r21 (available from his web-site) does in fact implement this sampling process.
A similar procedure is defined for the priors of θ, and extensions to deeper stacks are straightforward.
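One up-down pass over a two-level stack can be sketched as follows, reusing the antoniak_sample helper from the earlier sketch (for brevity we drop the parent index m, use a finite-U stand-in for the $\mathcal{D}[(\gamma, R)]$ top level, and all shapes and values are illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    U, J, Kb = 6, 5, 3                      # topics, chapters per book, books
    alpha, beta, gamma = 1.0, 1.0, 1.0
    N = rng.integers(1, 20, size=(U, J, Kb))    # stand-in counts N_{u|jk}
    nu = rng.dirichlet(np.ones(U), size=Kb).T   # current nu_{u|k}, shape (U, Kb)
    tau = rng.dirichlet(np.ones(U))             # current tau_u

    # Up the stack: counts -> table counts S (per book) -> table counts R.
    S = np.array([[sum(antoniak_sample(N[u, j, k], alpha * nu[u, k], rng)
                       for j in range(J)) for k in range(Kb)] for u in range(U)])
    R = np.array([sum(antoniak_sample(S[u, k], beta * tau[u], rng)
                      for k in range(Kb)) for u in range(U)])

    # Down the stack: sample tau, then each book-level nu given tau.
    tau = rng.dirichlet(gamma / U + R)
    nu = np.stack([rng.dirichlet(beta * tau + S[:, k]) for k in range(Kb)], axis=1)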
If all $z$ variables carry the same index $n$, sampling $z_n$ given the hierarchical priors is very similar
to the FFBS procedure described in the previous section, except that the count arrays may carry
a subset of the indices from $n$, e.g. $N^{-ijk}_{z|\mathrm{pa}, jk}$. Since these counts are specific to a chapter they are
typically smaller, resulting in a higher variance for the samples $z$. If two neighboring $z$ variables
carry different subsets of $n$ labels, e.g. node $z^0_j$ in Fig.2c, the conditional distributions are harder to
compute. The general rule is to identify and remove all $z'$ variables that are impacted by changing
the value for $z$ under consideration, e.g. $\{z^1_{jw}, \forall w\} \cup \{z^2_{jf}, \forall f\}$ in Fig.2c if we resample $z^0_j$. To
compute the conditional probability we set $z = k$ and add the impacted variables $z'$ back into the
system, one-by-one in an arbitrary order, assigning them to their old values.
It is also instructive to place DP priors (instead of HDP priors) of the form $\mathcal{D}[\alpha_b / K^b]$ directly on $\pi$
(skipping $\tau$). In taking the infinite limit the conditional distribution for existing states $z^b$ becomes
directly proportional to $N_{z^b|\mathrm{pa}^b}$ (the $\alpha_b \tau_{z^b}$ term is missing). This has the effect that a new state
$z^b = k$ that was discovered for some parent state $\mathrm{pa}^b = l$ will not be available to other parent states,
simply because $N_{k|l'} = 0$ for $l' \neq l$. The result is that the state space forks into a tree structure as we
move down the Bayes net. When the network structure is a linear chain, this model is equivalent
to the "nested-DP" introduced in [10] as a prior on tree-structures. The corresponding Bayes net
is depicted in Fig.3b. A chain of length 1 is of course just a Dirichlet process mixture model. A
DP prior is certainly appropriate for nodes zb with CPTs that do not depend on other parents or
additional labels, e.g. nodes z3 and z4 in Fig.1a. Interestingly, an HDP would also be possible
and would result in a different model. We will however follow the convention that we will use the
minimum depth necessary for modelling the structure of the data.
6 Examples
Example: HDP Perhaps the simplest example is an HDP itself, see Fig.1b. It consists of a single
topic node and a single observation node. If we make θ depend on the item index i, i.e. $\theta_{x|z,i}$, we
obtain the infinite version of the "user rating profile" (URP) model [11]. If we make θ depend on j
instead and add a prior: $\bar\theta_{x|z} \to \theta_{x|z,j}$, we obtain an "HDP with random effects" [12] which has
the benefit that shared topics across documents can vary slightly relative to each other.
Example: Infinite State Chains The ?Pachinko allocation model? (PAM) [13] consists of a linear
chain of assignment variables with document specific transition probabilities, see Fig.3a. It was
proposed to model correlations between topics. The infinite version of this is clearly an example
of an ISBN. An equivalent Chinese restaurant process formulation was published in [14]. A slight
variation on this architecture was described in [15] (POM). Here, images are modeled as mixtures
over parts and parts were modeled as mixtures over visual words. Finally, a visual word is a distribution over features. POM is only subtly different from PAM (see Fig.3a) in that parts are not
image-specific distributions over words, and so the distribution $\pi_{z^1|z^2}$ does not depend on j.
Example: BiHDP This model, depicted in Fig.2a, has a data variable $x_{ji}$ and two parent topic
variables $z^1_{ji}$ and $z^2_{ji}$. One can think of j as the customer index and i as the product index (and no IID
repeated index). The value of x is the rating of that customer for that product. The hidden variables
$z^1_{ji}$ and $z^2_{ji}$ represent product groups and customer groups. Every data entry is assigned to both a
customer group and a product group, which together determine the factor from which we sample
the rating. Note that the difference between the assignment variables is that their corresponding
CPTs $\pi_{z^1|j}$ and $\pi_{z^2|i}$ depend on j and i respectively. Extensions are easily conceived. For instance,
instead of two modalities, we can model multiple modalities (e.g. customers, products, year). Also,
single topics can be generalized to hierarchies of topics, so every branch becomes a PAM. Note
that for unobserved $x_{ji}$ values (not all products have been rated by all customers) the corresponding
$z^1_{ji}, z^2_{ji}$ are "dangling" and can be integrated out. The result is that we should skip that variable in
the Gibbs sampler.
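For illustration, one collapsed-Gibbs update for a single rating might look as follows (a sketch under our own assumptions: a Beta-Bernoulli rating factor and counts that already exclude the current case; the experiments below use a binomial factor instead):

    import numpy as np

    def resample_groups(j, i, x, Ncust, Nprod, Nfac, alpha, tau1, tau2, rng):
        """Jointly resample (z1, z2) for rating x of customer j on product i.
        Ncust[j, a]: customer j's assignments to customer group a;
        Nprod[i, b]: product i's assignments to product group b;
        Nfac[a, b] = (positive ratings, total ratings) for factor (a, b)."""
        K1, K2 = len(tau1), len(tau2)
        p = np.zeros((K1, K2))
        for a in range(K1):
            for b in range(K2):
                p_a = Ncust[j, a] + alpha * tau1[a]     # CRP-style term for z1
                p_b = Nprod[i, b] + alpha * tau2[b]     # CRP-style term for z2
                pos, tot = Nfac[a, b]
                p_x = (pos + 1.0) / (tot + 2.0)         # Beta(1,1) predictive
                p[a, b] = p_a * p_b * (p_x if x == 1 else 1.0 - p_x)
        flat = p.ravel() / p.sum()
        idx = rng.choice(K1 * K2, p=flat)
        return idx // K2, idx % K2                      # new (z1, z2)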
Example: The Mixed-Membership Stochastic Block Model [16] This model is depicted in
Fig.2b. The main difference with HDP is that (like BiHDP) θ depends on two parent states $z_{i \to j}$
and $z_{j \to i}$, by which we mean that item i has chosen topic $z_{i \to j}$ to interact with item j and vice
versa. However, (unlike BiHDP) those topic states share a common distribution π. Indices only
run over distinct pairs i > j. These features make the model suitable for modeling social interaction networks or protein-protein interaction networks. The hidden variables jointly label the type of
interaction that was used to generate "matrix-element" $x_{ij}$.
Example: The Multimedia Model In the above examples we had a single observed variable in
the graphical model (repeated over ij). The model depicted in Fig.2c has two observed variables
and an assignment variable that is not repeated over items. We can think of the middle node $z^0_j$ as
the class label for a web-page j. The left branch can then model words on the web-page while the
right branch can model visual features on the web-page. Since no sharing is required for $z^0_j$ we used
a Dirichlet prior. The other variables have the usual HDP priors.
7 Experiments
To illustrate the ideas we implemented two models: BiHDP of Fig.2a and the "probabilistic object
model" (POM), explained in the previous section.
Market Basket Data In this experiment we investigate the performance of BiHDP on a synthetic
market basket dataset. We used the IBM Almaden association and sequential patterns generator to
create this dataset [17]. This is a standard synthetic transaction dataset generator often used by the
association research community. The generated data consists of purchases from simulated groups
of customers who have similar buying habits. Similarity of buying habits refers to the fact that
customers within a group buy similar groups of items. For example, items like strawberries and
cream are likely to be in the same item group and thus are likely to be purchased together in the
same market basket. The following parameters were used to generate data for our experiments: 1M
transactions, 10K customers, 1K different items, 4 items per transaction on average, 4 item groups
per customer group on average, 50 market basket patterns, 50 customer patterns. Default values
were used for the remaining parameters.
The two assignment variables correspond to customers and items respectively. For a given pair of
customer and item groups, a binomial distribution was used to model the probability of a customer
group making a purchase from that item group. A collapsed Gibbs sampler was used to fit the model.
After 1000 epochs the system converged to 278 customer groups and 39 item factors. Fig.4 shows
the results. As can be seen, most item groups correspond directly to the hidden ground truth data.
The conclusion is that the model can successfully learn the hidden structure in the data.
Learning Visual Vocabularies LDA has also gained popularity to model images as collections of
features. The visual vocabulary is usually determined in a preprocessing step where k-means is run
to cluster features collected from the training set. In [15] a different approach was proposed in which
the visual word vocabulary was learned jointly with fitting the parameters of the model. This can
have the benefit that the vocabulary is better adapted to suit the needs of the model. Our extension of
their PLSA-based model is the infinite state model given by Fig.3a with 2 hidden variables (instead
of 3) and $\pi_{z^1|z^2}$ independent of j. x is modeled as a Gaussian-distributed random variable over
feature values, $z^1$ represents the word identity and $z^2$ is the topic index.
We used the Harris interest-point detector and 21×21 patches centered on each interest point as input
to the algorithm. We normalized the patches to have zero mean. Next we reduced the dimensionality
of detections from 441 to 100 using PCA. The procedure described above generates a set of between
Learned: 223, 619, 271, 448, 39, 390              True: 223, 271, 448, 39, 427, 677
Learned: 364, 250, 718, 952, 326, 802             True: 364, 718, 952, 326, 542, 98
Learned: 159, 563, 780, 995, 103, 216, 598, 72    True: 159, 563, 780, 995, 103, 216, 542, 72
Learned: 227, 130, 862, 991, 904, 213             True: 227, 130, 862, 991, 904, 213
Learned: 953, 175, 956, 385, 269, 14, 64          True: 953, 175, 956, 385, 269, 14, 956
Learned: 49, 657, 906, 604, 229                   True: 49, 657, 906, 604, 229
Learned: 295, 129, 662, 922, 705, 210             True: 295, 129, 662, 922, 705, 68
Learned: 886, 460, 471, 933, 544                  True: 886, 460, 471, 933, 917
Learned: 489, 818, 927, 378, 64, 710              True: 489, 818, 927, 378, 64, 247
Learned: 776, 224, 139, 379                       True: 776, 224, 139, 379
Figure 4: The 10 most popular item groups learned by the BiHDP model (left) compared to ground truth
item groups for market basket data (right). Learned items are ordered by decreasing popularity. Ground truth
items have no associated weight; therefore, they were ordered to facilitate comparison with the left column.
Non-matching items are shown in boldface.
Figure 5: Precision Recall curves for Caltech-4 dataset (left) and turntable dataset (right). Solid curve represents POM and dashed curve represents LDA.
50 and 400, 100-dimensional detections per image. Experiments were performed using the Caltech-4 and "turntable" datasets. For Caltech-4 we used 130 randomly sampled images from each of the 4
categories for training. LDA was fit using 500 visual words and 50 parts (which we found to give
the best results). The turntable database contains images of 15 toy objects. The objects were placed
on a turntable and photographed every 5 degrees. We have used angles 15, 20, 25, 35, 40, and 45
for training, and angles 10, 30, and 50 for testing. LDA used 15 topics and 200 visual words (which
again was optimal). LDA was then fitted to both datasets using Gibbs sampling. We initialized POM
with the output of LDA to make sure the comparison involved similar modes of the distribution.
The precision-recall curves for this dataset are shown in Fig.5. Images were labelled by choosing
the majority class across the 11 most similar retrieved images. Similarity was measured as the
probability of the query image given the part probabilities of the retrieved image.
These experiments show that ISBNs can be successfully implemented. We are not interested in
claiming superiority of ISBNs, but rather hope to convey that ISBNs are a convenient tool to design
models and to facilitate the search for the number of latent states.
8 Discussion
We have presented a unified framework to organize the fast growing class of "topic models". By
merging ideas from Bayes nets, nonparametric Bayesian statistics and topic models we have arrived
at a convenient framework to 1) extend existing models to infinite state spaces, 2) reason about
and design new models and 3) derive efficient inference algorithms that exploit the structure of the
underlying Bayes net.
Not every topic model naturally fits the suit of an ISBN. For instance, the infinite HMM [18] is like a
POM model with emission states, but with a single transition probability shared across time. When
marginalizing out π this has the effect of coupling all z variables. An efficient sampler for this
model was introduced in [19]. Also, in [10, 20] models were studied where a word can be emitted at
any node corresponding to a topic variable z. We would need an extra switching variable to fit this
into the ISBN framework.
We are currently working towards a graphical interface where one can design ISBN models by
attaching together HᵏDP modules and where the system will automatically perform the inference
necessary for the task at hand.
Acknowledgements
This material is based upon work supported by the National Science Foundation under Grant No.
0447903 and No. 0535278 and by ONR under Grant No. 00014-06-1-0734.
References
[1] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[2] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. To appear in Journal of the American Statistical Association, 2006.
[3] S. L. Scott. Bayesian methods for hidden Markov models: recursive computing in the 21st century. Journal of the American Statistical Association, 97:337–351, 2002.
[4] T. Minka. Estimating a Dirichlet distribution. Technical report, 2000.
[5] M.D. Escobar and M. West. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90:577–588, 1995.
[6] T.L. Griffiths and M. Steyvers. A probabilistic approach to semantic representation. In Proceedings of the 24th Annual Conference of the Cognitive Science Society, 2002.
[7] Y.W. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In NIPS, volume 19, 2006.
[8] C.E. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, 2:1152–1174, 1974.
[9] B. Bidyuk and R. Dechter. Cycle-cutset sampling for Bayesian networks. In Sixteenth Canadian Conf. on AI, 2003.
[10] David Blei, Thomas L. Griffiths, Michael I. Jordan, and Joshua B. Tenenbaum. Hierarchical topic models and the nested Chinese restaurant process. In Neural Information Processing Systems 16, 2004.
[11] B. Marlin. Modeling user rating profiles for collaborative filtering. In Advances in Neural Information Processing Systems 16, 2004.
[12] S. Kim and P. Smyth. Hierarchical Dirichlet processes with random effects. In NIPS, volume 19, 2006.
[13] W. Li and A. McCallum. Pachinko allocation: DAG-structured mixture models of topic correlations. In Proceedings of the 23rd International Conference on Machine Learning, pages 577–584, 2006.
[14] W. Li, A. McCallum, and D. Blei. Nonparametric Bayes pachinko allocation. In UAI, 2007.
[15] D. Larlus and F. Jurie. Latent mixture vocabularies for object categorization. In British Machine Vision Conference, 2006.
[16] E. Airoldi, D. Blei, E. Xing, and S. Fienberg. A latent mixed membership model for relational data. In LinkKDD '05: Proceedings of the 3rd international workshop on Link discovery, pages 82–89, 2005.
[17] R. Agrawal, T. Imielinski, and A. Swami. Mining associations between sets of items in massive databases. In Proc. of the ACM-SIGMOD 1993 Intl Conf on Management of Data, 1993.
[18] M.J. Beal, Z. Ghahramani, and C.E. Rasmussen. The infinite hidden Markov model. In NIPS, pages 577–584, 2001.
[19] Y. W. Teh, D. Görür, and Z. Ghahramani. Stick-breaking construction for the Indian buffet process. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 11, 2007.
[20] W. Li, D. Mimno, and A. McCallum. Mixtures of hierarchical topics with pachinko allocation. In Proceedings of the 24th International Conference on Machine Learning, 2007.
2,549 | 3,311 | Hippocampal Contributions to Control:
The Third Way
M?at?e Lengyel
Collegium Budapest Institute for Advanced Study
2 Szenth?aroms?ag u, Budapest, H-1014, Hungary
and
Computational & Biological Learning Lab
Cambridge University Engineering Department
Trumpington Street, Cambridge CB2 1PZ, UK
lmate@gatsby.ucl.ac.uk
Peter Dayan
Gatsby Computational Neuroscience Unit, UCL
17 Queen Square, London WC1N 3AR, UK
dayan@gatsby.ucl.ac.uk
Abstract
Recent experimental studies have focused on the specialization of different neural
structures for different types of instrumental behavior. Recent theoretical work
has provided normative accounts for why there should be more than one control
system, and how the output of different controllers can be integrated. Two particlar controllers have been identified, one associated with a forward model and
the prefrontal cortex and a second associated with computationally simpler, habitual, actor-critic methods and part of the striatum. We argue here for the normative
appropriateness of an additional, but so far marginalized control system, associated with episodic memory, and involving the hippocampus and medial temporal
cortices. We analyze in depth a class of simple environments to show that episodic
control should be useful in a range of cases characterized by complexity and inferential noise, and most particularly at the very early stages of learning, long
before habitization has set in. We interpret data on the transfer of control from the
hippocampus to the striatum in the light of this hypothesis.
1
Introduction
What use is an episodic memory? It might seem that the possibility of a fulminant recreation of a
former experience plays a critical role in enabling us to act appropriately in the world [1]. However,
why should it be better to act on the basis of the recollection of single happenings, rather than
the seemingly normative use of accumulated statistics from multiple events? The task of building
such a statistical model is normally the dominion of semantic memory [2], the other main form of
declarative memory. Issues of this kind are frequently discussed under the rubric of multiple memory
systems [3, 4]; here we consider it from a normative viewpoint in which memories are directly used
for control.
Our answer to the initial question is the computational challenge of using a semantic memory as a
forward model in sequential decision tasks in which many actions must be taken before a goal is
reached [5]. Forward and backward search in the tree of actions and consequent states (ie modelbased reinforcement learning [6]) in such domains impose crippling demands on working memory
1
(to store partial evaluations) and it may not even be possible to expand out the tree in reasonable
times. If we think of the inevitable resulting errors in evaluation as a form of computational noise
or uncertainty, then the use of the semantic memory for control will be expected to be subject to
substantial error. The main task for this paper is to explore and understand the circumstances under
which episodic control, although seemingly less efficient in its use of experience, should be expected
to be more accurate, and therefore be evident both psychologically and neurally.
This argument about episodic control exactly parallels one recently made for habitual or cached
control [5]. Model-free reinforcement learning methods, such as Q-learning [7] are computationally
trivial (and therefore accurate) at the time of use, since they learn state-value functions or stateaction-value functions that cache the results of the expensive and difficult search. However, modelfree methods learn through a form of bootstrapping, which is known to be inefficient in the use of
experience. It is therefore optimal to employ cached control rather than model-based control only
after sufficient experience, when the inaccuracy of the former over learning is outweighed by the
computational noise induced in using the latter. The exact tradeoff depends on the prior statistics
over the possible tasks.
We will show that in general, just as model-free control is better than model-based control after
substantial experience, episodic control is better than model-based control after only very limited
experience. For some classes of environments, these two other controllers significantly squeeze the
domain of optimal use of semantic control.
This analysis is purely computational. However, it has psychological and neural implications and
associations. It was argued [5] that the transition from model-based to model-free control explains
a wealth of psychological observations about the transition over the course of learning from goaldirected control (which is considered to be model-based) to habitual control (which is model-free).
In turn, this is associated with an apparent functional segregation between the dorsolateral prefrontal
cortex and dorsomedial striatum, implementing model-based control, and the dorsolateral striatum
(and its neuromodulatory inputs), implementing model-free control. Exactly how the uncertainties
associated with these two types of control are calculated is not clear, although it is known that the
prelimbic and infralimbic cortices are somehow involved in arbitration. The psychological construct
for episodic control is obvious; its neural realization is likely to be the hippocampus and medial temporal cortical regions. How arbitration might work for this third controller is also not clear, although
there have been suggestions on how uncertainty may be represented neurally in the hippocampus
[8]. There is also evidence for the transfer of control from hippocampal to striatal structures over
the course of learning [9, 10] suggesting that arbitration might happen, but unfortunately, in these
studies, the possibility of an additional step via dorsolateral prefrontal cortex was not fully tested.
In this paper, we explore the nature and (f)utility of episodic control. Section 2 describes the
simple tree-structured Markov decision problems that we use to illustrate and quantitate our arguments. Section 3 provides a detailed, albeit approximate, analysis of uncertainty and learning in
these environments. Finally, section 4 uses these analytical methods and simulations to study the
episodic/forward model tradeoff.
2
Paradigm for analysis
We seek to analyse computational and statistical trade-offs that arise in choosing actions that maximize long-term rewards in sequential decision making problems. The trade-offs originate in uncertainties associated with learning and inference. We characterize these tasks as Markov decision
processes (MDPs) [6] whose transition and reward structure are initially unknown by the subject,
but are drawn from a parameterized prior that is known.
The key question is how well different possible control strategies can perform given this prior and
a measured amount of experience. Like [11], we simplify exploration using a form of parallel sampling model in order to focus on the ability of controllers to exploit knowledge extracted about an
environment. Performance is naturally measured using the average reward that would be collected
in a trial; this average is then itself averaged over draws of the MDP and the stochasticity associated
with the exploratory actions. We analyse three controllers: a model-based controller without computational noise, which provides a theoretical upper limit on performance, a realistic model-based
2
A
B
3
Performance
2.5
actions
non!terminal state
4
3
2
1
2
1.5
1
numerical simulations
analytical approximations
0.5
0
0.5
1
transition probabilities
0
0
2
4
6
Learning time
8
10
0.6
0.3
!10
0
reward
10
0
probability
terminal state
Figure 1: A, An example tree-structured MDP, with depth D = 2, branching factor B = 3, and
A = 4 available actions in each non-terminal state. The horizontal stacked bars in the boxes of the
left and middle column show the transition probabilities for different actions at non-terminal states,
color coded by the successor states to which they lead (matching the color of the corresponding
arrows). Transition probability distributions are iid. according to a Dirichlet distribution whose
parameters are all 1. Gaussians in the right column show the reward distributions at terminal states.
Each has unit variance and a mean which is drawn iid. from a normal distribution of mean ?r? = 0
and standard deviation ?r? = 5. All parameters in later figures are the same, unless otherwise noted.
B, Validating the analytical approximations by numerical simulations (A = 3).
controller with computational noise that we regard as the model of semantic memory-based control,
and an ?episodic controller?.
We concentrate on a simple subset of MDPs, namely ?tree-structured MDPs? (tMDPs), which are
illustrated in Figure 1A (and defined formally in the Supporting Material). We expect the qualitative
characteristics of our findings to apply for general MDPs; however, we used tMDPs since they
represent a first-order, analytically tractable, approximation of the general problem presented by any
MDP at a given decision point if it is unfolded in time (ie a decision tree with finite time-horizon).
Actions lead to further states (and potentially rewards), from where further possible actions and
thus states become available, and so on. The key difference is that in a general MDP, a state can
be revisited several times even within the same episode, which is impossible in a tMDP. Thus, our
approach neglects correlations between values of future states. This is formally correct in the limit
of infinitely diluted MDPs, but is otherwise just an approximation.
3
The model-based controller
In our paradigm, the task for the model-based controllers is to use the data from the exploratory
trials to work out posterior distributions over the unknown transition and reward structure of the
tMDP, and then report the best action at each state. It is well known that actually doing this is radically intractable. However, to understand the tradeoffs between different controllers, we only need
to analyze the expected return from doing so, averaging over all the random quantities. One of the
technical contributions of this work is the set of analytically- and empirically-justified approximations to those averages (which are presented in the Supplementary Material), based on the assumed
knowledge of the parameters governing the generation of the tMDP, and as a function of the amount
of exploratory experience.
We proceed in three stages. First, we consider the model-based controller in the case that it has
experienced so many samples that the parameters of the tMDP are known exactly. This provides an
(approximate) upper bound on the expected performance of any controller. Second, we approximate
3
the impact of incomplete exploration by corrupting the controller by an aliquot of noise whose
magnitude is determined by the parameters of the problem. Finally, we approximate the additionally
deleterious effect of limited computational resources by adding an assumed induced bias and extra
variance.
The first step is to calculate the asymptotic performance when infinitely many data have been collected. In this limit, transition probabilities and reward distributions can be treated as known quantities. Critical to our analysis is that the independence and symmetry properties of regular tMDPs
imply that we mostly need only analyze a single ?sub-treelet? of the tree (one non-terminal state and
its successor states), from which the results generalise to the whole tree by recursion. In the case
of the asymptotic analysis, this recursive formulation turns out to allow for a closed-form solution
for the mean ? and variance ? 2 of an approximate Gaussian distribution characterizing the average
value of one full tree traversal starting from the root node:
D/2
? = ?r? +
1 ? ?2
1?
1/2
?2
?1 ?r?
2
?2 = ?D
2 ?r?
(1)
where ?r? and ?r2? are the mean and variance of the normal distribution from which the means of the
reward distributions at the terminal states are drawn, and 0 ? ?1 , ?2 ? 1 are constants that depend
on the other parameters of the tMDP. This calculation depends on characterizing order statistics
of multivariate Gaussian distributions which are equicorrelated (all the off-diagonal terms of the
covariance matrix are the same) [12]. Equation 1 is actually an interesting result in and of itself ?
it indicates the extent to which the controller can take advantage of the variability ? ? ?r? ? ?r? in
boosting its expected return from the root node as a function of the depth of the tree.
The second step is to observe that we expect the benefits of episodic control to be most apparent
given very limited exploratory experience. To make analytical progress, we are forced to make the
significant assumption that the effects of this can be modeled by assuming that the controller does
not have access to the true values of actions, but only to ?noisy? versions. This ?noise? comes from
the fact that computing the values of different actions is based on estimates of transition probability
and reward distributions. These estimates are inherently stochastic themselves, as they are based on
stochastic experience. We have been able to show that the form of the resulting ?noise? in the action
values can have the effect of scaling down the true values of actions at states by a factor ?1 and
adding extra noise ?2 . Although we were unable to find a closed-form solution for the effects of ?1
and ?2 on the performance of the controller, a recursive analytical formulation, though involved, is
still possible (see Supporting Material).
Figure 1B shows the learning curve for the model-based controller computed using our analytical
predictions (blue line) and using exhaustive numerical simulations (red line, average performance
in 100 sample tMDPs, with the learning process rerun 100 times in each). The inaccuracies entailed
by our approximations are tolerable (also for other parameters; simulations not shown), and so from
this point we use those to analyse the performance of the optimal, model-based controller.
The dark blue solid curve in figure 2A (labelled ?2 = 0) shows the performance of model-based
control as a function of the number of exploration samples (the equivalent of the dark blue curve in
figure 1B, but for A = 4 rather than A = 3). For comparison, the dashed line shows the asymptotic
expected value. The slight decrease in the approximate value arises because the approximations
become slightly looser as the noise gets less; however, once again we have been able to show (simulations not shown) that our analysis is highly accurate compared with extensive actual samples.
The final step is to model the effects of the computational complexity of the model-based controller
on performance arising from the severe demands it places on such facets as working memory. These
necessitate pruning (ie ignoring parts of the decision tree), or sub-sampling, or some other such
approximation. We treat the effects of all approximations by forcing the controller to have access
to only noisy versions of the (exploration-limited) action values. Just as for incomplete exploration,
we model the noise as a combination of downscaling the true action values by a parameter ?1 and
adding excess variability ?2 . Note that whereas the terms ?1 , ?2 characterizing the effects of learning
are determined by the number of samples; ?1 , ?2 are set by hand to capture the assumed effects
on inference of the computational complexity. The asymptotic values of the curves in figure 2A
for various values of ?2 (for all of them, ?1 = 1) demonstrate the effects of inferential noise on
performance.
4
A
B
3.5
0.9
Relative performance
3
Performance
2.5
2
!2=0 ! asymptotic
1.5
!2=0
1
!2=1
!2=2
0.5
!2=3
0
0
20
40
60
Learning time
80
0.8
0.7
0.6
0.5
!2=2
0.3
! =3
0.2
!2=2, D=12
0.1
0
100
!2=1
0.4
2
20
40
60
Learning time
80
100
Figure 2:
A, Learning curves for the model-based controller at different levels of computational noise: ?1 = 1,
?2 is increased from 0 to 3. The approximations used for computing these curves are less accurate in
the low-noise limit, hence the paradoxical slight decrease in the performance of the perfect controller
(without noise) at the end of learning. The dashed line shows the asymptotic approximation which
is more accurate in this limit, demonstrating that the inaccuracy of the experience-dependent approximation is not disastrous. B, Performance of noisy controllers normalized by that of the perfect
controller in the same environment at the same amount of experience. The brown line corresponds
to a more difficult environment with greater depth. Note that ?learning time? is measured by the
number of times every state-action pair has been sampled. Thus decreased performance shown in
the more complex environment is not due to the increased sparsity of experience.
So far, we have separately considered the effects of computational noise and uncertainty due to limited experience. In reality, both factors affect the model-based controller. The full plots in figure 2A,
B show the interaction of these two factors (figure 2B shows the same data as figure 2A, but scaled
to the performance of the noise-free controller for the given amount of experience). Computational
noise not only makes the asymptotic performance worse, by simply down-scaling average rewards,
but it also makes learning effectively slower. This is because the adverse effects of computational
noise depend on the differences between the values of possible actions. If these values appear to
be widely different, then computational noise will still preserve their order, and thus the one that
is truly best is still likely to be chosen. However, if action values appear roughly the same, then a
little noise can easily change their ordering and make the controller choose a suboptimal one. Little
experience only licenses small apparent differences between values, and this boosts the corrupting
effect of the inferential noise. Given more experience, the controller increasingly learns to make
distinctions between different actions that looked the same a priori.
Thus, while earlier work suggested that model-based control will be superior at the limit of few
exploratory samples due to the unsurpassable data-efficiency of optimal statistical inference [5], we
show here that in the really low data limit another factor cripples its performance: the amplified
influence of computational noise. How much experience constitutes ?little? and how much noise
counts as ?much? is of course relative to the complexity of the environment.
4
Episodic control
If model-based control is indeed crippled by computational noise given limited exploration, could
there be an effective alternative? Although outside the scope of our formal analysis, this is particularly important given the ubiquity of non-stationary environments [13], for which the effects of
continual change bound the effective number of exploratory samples. That the cache-based or habitual controller is even worse in this limit (since it learns by bootstrapping) was a main rationale
for the uncertainty-based account of the transfer from goal-directed to habitual control suggested by
Daw et al [5]. Thus the habitual controller cannot step into the breach.
It is here that we expect episodic control to be most useful. Intuitively, if a subject has experienced a
complex environment just a few times, and found a sequence of actions that works reasonably well,
then, provided that exploitation is at a premium over exploration, it seems obvious for the subject
5
A
B
4
3.5
2
1.5
1
0.5
5
10
Learning time
15
2.5
Performance
2.5
2
1.5
2
1.5
1
1
0.5
0.5
0
0
model!based perfect
model!based nois:
episodic
3
2.5
Performance
Performance
3.5
model!based perfect
model!based nois:
episodic
3
3
0
0
C
3.5
model!based perfect
model!based nois;
episodic
5
10
Learning time
15
0
0
5
10
15
Learning time
Figure 3: Episodic vs. model-based control. Solid red line shows the performance of noisy modelbased control (?2 = 2), blue line shows that of episodic control. Dashed red line shows the case of
perfect model-based control which constitutes the best performance that could possibly be achieved.
The branching factor of the environment increased from B = 2 (A), B = 3 (B) to B = 4 (C).
just to repeat exactly those actions, rather than trying to build and use a complex model. This act of
replaying a particular sequence of events from the past is exactly an instance of episodic control.
More specifically, we employ an extremely simple model of episodic memory, and assume that each
time the subject experiences a reward that is considered large enough (larger than expected a priori)
it stores the specific sequence of state-action pairs leading up to this reward, and tries to follow
such a sequence whenever it stumbles upon a state included in it. If multiple successful sequences
are available for the same state, the one that yielded maximal reward is followed. We expect such
a strategy to be useful in the low data limit because, unlike in cache-based control, there is no
issue of bootstrapping and temporal credit assignment, and unlike in model-based control, there is
no exhaustive tree-search involved in action selection. Of course its advantages will be ultimately
counteracted by the haphazardness of using single samples that are ?adequate?, but by that time the
other controllers can take over.
Although we expect our approximate analytical methods to provide some insight into its characteristics, we have so far only been able to use simulations to study the episodic controller in the usual
class of tMDPs. Comparing the blue (episodic) and red (model-based, but noisy; ?2 = 2) curves, in
figure 3A-C, it is apparent that episodic control indeed outperforms noisy model-based control in the
low data limit. The dashed curves show the performance of the idealized model-based controller that
is noise-free. This emphasizes the arbitrariness of our choice of noise level ? the greater the noise,
the longer the dominance of episodic control. However, in complicated environments, even very
small amounts of noise are near catastrophic for model-based control (see brown line in Fig. 2B),
and so this issue is not nugatory.
The progression of the learning curves in figure 3A-C make the same point a different way. They
show what happens as the complexity of the environment is increased by increasing the branching
factor. At the same level of computational noise, episodic control supplants model-based control for
increasing volumes of exploratory samples. We expect that the same is true if the complexity of the
environment is increased by increasing the depth of the tree (D) instead, or as well.
Figure 3A-C also makes the point that the asymptotic performance of the episodic controller is
rather poor, and is barely improved by extra learning. A smarter episodic strategy, perhaps involving
reconsolidation to eliminate unfortunate sample trajectories, might perform more competently.
5
Discussion
An episodic controller operates by remembering for each state the single action that led to the best
outcome so far observed. Here, we studied the nature and benefits of episodic control. This controller is statistically inefficient for solving Markov decision problems compared with the normative
strategy of building a statistical forward model of the transitions and outcomes, and searching for
the optimal action. However, episodic control is computationally very straightforward, and therefore does not suffer from any excess uncertainty or noise arising from the severe calculational and
search complexities of the forward model. This implies that it can best forward model control under
various circumstances.
6
To explore this, we first characterized a class of regular tree-structured Markov decision problems
using four critical parameters ? the depth of the tree; the fan-out from each state; the number of
actions per state, and the characteristic (Dirichlet) statistics of the transitions consequent on each
action. We then used theoretical and empirical methods to analyze the statistical structure of control
based on a forward model in the face of limited data. We showed that this control can readily be
outperformed by an episodic controller which does not suffer from computational inaccuracy, at
least in the particular limits of high task complexity and significant inferential noise in the modelbased controller. We also showed how the noise in the latter has a particularly pernicious effect
on the course of learning, corrupting the choice between actions whose values appear, because of
limited experience, closer than they actually are.
Our results are obviously partial. In particular, the constraint of using a regular tree-structured MDP
is much too severe ? given the intuition from the results above, we can now consider more conventional MDPs that better model the classes of experiments that have probed the transfer of control.
Further, it would be important to consider models of exploration more general than the parallel sampler, which provides homogeneous sampling of state-action pairs. The particular challenge is when
exploration and exploitation are coupled, as then all the samples become interdependent in a gordian
manner.
Our analysis paralleled that of [5], who showed that the noisy forward-model controller is also
beaten by a cached (actor-critic-like) controller in the opposite limit of substantial experience in an
environment. The cached controller is also computationally straightforward, but relies on a completely different structure of learning and inference.
In psychological terms, the episodic controller is best thought of as being goal-directed, since the
ultimate outcome forms part of the episode that is recalled. Unfortunately, this makes it difficult
to distinguish behaviorally from goal-directed control resulting from the forward model. In neural
terms, the episodic controller is likely to rely on the very well investigated systems involved in
episodic memory, namely the hippocampus and medial temporal cortices. Importantly, there is direct
evidence of the transfer of control from hippocampal to striatal structures over the course of learning
[9, 10], and there is some evidence that episodic and habitual control can be simultaneously active.
Unfortunately, there are few data [14] on structures that might control the competition or transfer
process, and no test as to whether there is an intermediate phase in which prefrontal mechanisms
instantiating the forward model might be dominant. Predictions from our work associated with this
are the most ripe for experimental test.
This paper is an extended answer to the question of the computational benefit of episodic memory,
which, crudely speaking, stores particular samples, over semantic memory, which stores probability
distributions. It is, of course, not the only answer ? for instance, subjects that cache are obviously
better off remembering exactly where in particular they stored food [15] than having to search all
the places that are likely under a (semantic) prior. Equally, in game theoretic interactions between
competitors, Nash equilibria are typically stochastic, and therefore seemingly excellent candidates
for control based on a semantic memory. However, taking advantage of the flaws in an opponent
requires exactly remembering how its actions deviate from stationary statistics, for which an episodic
memory is a most useful tool [16].
One potential caveat to our results is that methods associated with memory-based reasoning [17]
(such as kernel density estimation) create a semantic memory from an episodic one, for instance
by recalling all episodes close to a cue, and weighting them by a statistically-appropriate measure
of their distance. This form of semantic memory can be seen as arising without any consolidation
process whatsoever. However, although this method has its computational attractions, it is psychologically implausible since phenomena such as priming make it extremely difficult to recall multiple
closely related samples from an episodic memory, let alone to do so in a statistically unbiased way
(but see [18]).
In sum, we have provided a normative justification from the perspective of appropriate control for
the episodic component of a multiple memory system. Pressing from a theoretical perspective is a
richer understanding of the integration, beyond mere competition, of the information residing in, and
the decisions made by, all the systems involved in choice.
Acknowledgements
We are very grateful to Nathaniel Daw and Quentin Huys for helpful discussions. Funding was from
the Gatsby Charitable Foundation (ML and PD), and the EU Framework 6 (IST-FET 1940) (ML).
References
[1] Dudai, Y. & Carruthers, M. The Janus face of Mnemosyne. Nature 434, 567 (2005).
[2] Káli, S. & Dayan, P. Off-line replay maintains declarative memories in a model of hippocampal-neocortical interactions. Nat. Neurosci. 7, 286–294 (2004).
[3] McClelland, J.L., McNaughton, B.L. & O'Reilly, R.C. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychol. Rev. 102, 419–457 (1995).
[4] White, N.M. & McDonald, R.J. Multiple parallel memory systems in the brain of the rat. Neurobiol. Learn. Mem. 77, 125–184 (2002).
[5] Daw, N.D., Niv, Y. & Dayan, P. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nat. Neurosci. 8, 1704–1711 (2005).
[6] Sutton, R.S. & Barto, A.G. Reinforcement Learning (MIT Press, 1998).
[7] Watkins, C.J.C.H. Learning from Delayed Rewards. PhD thesis, Cambridge University (1989).
[8] Lengyel, M. & Dayan, P. Uncertainty, phase and oscillatory hippocampal recall. In Advances in Neural Information Processing Systems 19 (eds. Schölkopf, B., Platt, J. & Hoffman, T.) 833–840 (MIT Press, Cambridge, MA, 2007).
[9] Packard, M.G. & McGaugh, J.L. Double dissociation of fornix and caudate nucleus lesions on acquisition of two water maze tasks: further evidence for multiple memory systems. Behav. Neurosci. 106, 439–446 (1992).
[10] Poldrack, R.A. et al. Interactive memory systems in the human brain. Nature 414, 546–550 (2001).
[11] Kearns, M. & Singh, S. Finite-sample convergence rates for Q-learning and indirect algorithms. In Advances in Neural Information Processing Systems Vol. 11 (eds. Kearns, M.S., Solla, S.A. & Cohn, D.A.) 996–1002 (MIT Press, Cambridge, MA, 1999).
[12] Owen, D.B. & Steck, G.P. Moments of order statistics from the equicorrelated multivariate normal distribution. Ann. Math. Stat. 33, 1286–1291 (1962).
[13] Kording, K.P., Tenenbaum, J.B. & Shadmehr, R. The dynamics of memory as a consequence of optimal adaptation to a changing body. Nat. Neurosci. 10, 779–786 (2007).
[14] Poldrack, R.A. & Rodriguez, P. How do memory systems interact? Evidence from human classification learning. Neurobiol. Learn. Mem. 82, 324–332 (2004).
[15] Clayton, N.S. & Dickinson, A. Episodic-like memory during cache recovery by scrub jays. Nature 395, 272–274 (1998).
[16] Clayton, N.S., Dally, J.M. & Emery, N.J. Social cognition by food-caching corvids: the western scrub-jay as a natural psychologist. Philos. Trans. R. Soc. Lond. B Biol. Sci. 362, 507–522 (2007).
[17] Stanfill, C. & Waltz, D. Toward memory-based reasoning. Communications of the ACM 29, 1213–1228 (1986).
[18] Hintzman, D.L. MINERVA 2: A simulation model of human memory. Behav. Res. Methods Instrum. Comput. 16, 96–101 (1984).
On Sparsity and Overcompleteness in Image Models
Pietro Berkes, Richard Turner, and Maneesh Sahani
Gatsby Computational Neuroscience Unit, UCL
Alexandra House, 17 Queen Square, London WC1N 3AR
Abstract
Computational models of visual cortex, and in particular those based on sparse
coding, have enjoyed much recent attention. Despite this currency, the question
of how sparse or how over-complete a sparse representation should be, has gone
without principled answer. Here, we use Bayesian model-selection methods to address these questions for a sparse-coding model based on a Student-t prior. Having validated our methods on toy data, we find that natural images are indeed best
modelled by extremely sparse distributions; although for the Student-t prior, the
associated optimal basis size is only modestly over-complete.
1 Introduction
Computational models of visual cortex, and in particular those based on sparse coding, have recently enjoyed much attention. The basic assumption behind sparse coding is that natural scenes are
composed of structural primitives (edges or lines, for example) and, although there are a potentially
large number of these primitives, typically only a few are active in a single natural scene (hence the
term sparse, [1, 2]). The claim is that cortical processing uses these statistical regularities to shape
a representation of natural scenes, and in particular converts the pixel-based representation at the
retina to a higher-level representation in terms of these structural primitives.
Traditionally, research has focused on determining the characteristics of the structural primitives and
comparing their representational properties with those of V1. This has been a successful enterprise,
but as a consequence other important questions have been neglected. The two we focus on here
are: How large is the set of structural primitives best suited to describe all natural scenes (how
over-complete), and how many primitives are active in a single scene (how sparse)? We will also be
interested in the coupling between sparseness and over-completeness. The intuition is that, if there
are a great number of structural primitives, they can be very specific and only a small number will
be active in a visual scene. Conversely if there are a small number they have to be more general and
a larger number will be active on average. We attempt to map this coupling by evaluating models
with different over-completenesses and sparsenesses and discover where natural scenes live along
this trade-off (see Fig. 1).
In order to test the sparse coding hypothesis it is necessary to build algorithms that both learn the
primitives and decompose natural scenes in terms of them. There have been many ways to derive
such algorithms, but one of the more successful is to regard the task of building a representation
of natural scenes as one of probabilistic inference. More specifically, the unknown activities of the
structural primitives are viewed as latent variables that must be inferred from the natural scene data.
Commonly the inference is carried out by writing down a generative model (although see [3] for an
alternative), which formalises the assumptions made about the data and latent variables. The rules
of probability are then used to derive inference and learning algorithms.
Unfortunately the assumption that natural scenes are composed of a small number of structural
primitives is not sufficient to build a meaningful generative model. Other assumptions must therefore
be made and typically these are that the primitives occur independently, and combine linearly.
Figure 1: Schematic showing the space of possible sparse coding models in terms of sparseness (increasing in the direction of the arrow) and over-completeness. For reference, complete models lie along the dashed black line. Ideally every model could be evaluated (e.g. via their marginal likelihood or cross-validation) and the grey contours illustrate what we might expect to discover if this were possible: the solid black line illustrates the hypothesised trade-off between over-completeness and sparsity, whilst the star shows the optimal point in this trade-off.
These
are drastic approximations and it is an open question to what extent this affects the results of sparse
coding. The distribution over the latent variables xt,k is chosen to be sparse and typical choices
are Student-t, a Mixture of Gaussians (with zero means), and the Generalised Gaussian (which
includes the Laplace distribution). The output yt is then given by a linear combination of the K,
D-dimensional structural primitives gk , weighted by their activities, plus some additive Gaussian
noise (the model reduces to independent components analysis in the absence of this noise [4]),
p(x_{t,k} \mid \theta) = p_{\mathrm{sparse}}(\theta)    (1)
p(y_t \mid x_t, G) = \mathcal{N}_{y_t}(G x_t, \Sigma_y) .    (2)
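As a concrete illustration of Eqs. (1)-(2), the following sketch draws sparse Student-t latents and mixes them through a random basis; the dimensions and noise level are illustrative values, not taken from the paper.

import numpy as np

# Sample T observations from the linear sparse-coding generative model:
# K sparse latents per datum, mixed through a D x K basis G, plus
# isotropic Gaussian pixel noise.
D, K, T = 36, 72, 1000
rng = np.random.default_rng(0)
G = rng.normal(size=(D, K))                      # structural primitives
x = rng.standard_t(df=2.5, size=(T, K))          # sparse Student-t activities
y = x @ G.T + 0.1 * rng.normal(size=(T, D))      # noisy observations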
The goal of this paper will be to learn the optimal dimensionality of the latent variables (K) and
the optimal sparseness of the prior (α). In order to do this a notion of optimality has to be defined.
One option is to train many different sparse-coding models and find the one which is most ?similar?
to visual processing. (Indeed this might be a fair characterisation of much of the current activity in
field.) However, this is fraught with difficulty not least as it is unclear how recognition models map
to neural processes. We believe the more consistent approach is, once again, to use the Bayesian
framework and view this as a problem of probabilistic inference. In fact, if the hypothesis is that the
visual system is implementing an optimal generative model, then questions of over-completeness
and sparsity should be addressed in this context.
Unfortunately, this is not a simple task and quite sophisticated machine-learning algorithms have
to be harnessed in order to answer these seemingly simple questions. In the first part of this paper
we describe these algorithms and then validate them using artificial data. Finally, we present results
concerning the optimal sparseness and over-completeness for natural image patches in the case that
the prior is a Student-t distribution.
2 Model
As discussed earlier, there are many variants of sparse-coding. Here, we focus on the Student-t prior
for the latent variables xt,k :
p(x_{t,k} \mid \alpha, \lambda) = \frac{\Gamma\!\left(\frac{\alpha+1}{2}\right)}{\Gamma\!\left(\frac{\alpha}{2}\right) \sqrt{\alpha\pi}\, \lambda} \left( 1 + \frac{1}{\alpha} \left( \frac{x_{t,k}}{\lambda} \right)^{2} \right)^{-\frac{\alpha+1}{2}}    (3)
There are two main reasons for this choice: The first is that this is a widely used model [1]. The
second is that by implementing the Student-t prior using an auxiliary variable, all the distributions in
the generative model become members of the exponential family [5]. This means it is easy to derive
efficient approximate inference schemes like variational Bayes and Gibbs sampling.
The auxiliary variable method is based on the observation that a Student-t distribution is a continuous
mixture of zero-mean Gaussians, whose mixing proportions are given by a Gamma distribution over
the precisions. This indicates that we can exchange the Student-t prior for a two-step prior in which
we first draw a precision from a Gamma distribution and then draw an activation from a Gaussian
with that precision,
p(u_{t,k} \mid \alpha, \lambda) = \mathcal{G}_{u_{t,k}}\!\left( \frac{\alpha}{2}, \frac{2}{\alpha\lambda^{2}} \right) ,    (4)
p(x_{t,k} \mid u_{t,k}) = \mathcal{N}_{x_{t,k}}\!\left( 0, u_{t,k}^{-1} \right) ,    (5)
p(y_t \mid x_t, G) = \mathcal{N}_{y_t}(G x_t, \Sigma_y) ,    (6)
\Sigma_y := \mathrm{diag}(\sigma_y^2) .    (7)
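The auxiliary-variable construction can be checked numerically: drawing a Gamma precision and then a zero-mean Gaussian, as in Eqs. (4)-(5), reproduces the Student-t of Eq. (3). The (shape, scale) parameterization of the Gamma below is an assumption consistent with the reconstruction of Eq. (4) above.

import numpy as np
from scipy import stats

alpha, lam, n = 2.5, 1.0, 200_000
rng = np.random.default_rng(1)
u = rng.gamma(shape=alpha / 2.0, scale=2.0 / (alpha * lam**2), size=n)  # precisions
x = rng.normal(0.0, 1.0 / np.sqrt(u))            # Gaussians with those precisions
# Kolmogorov-Smirnov comparison to a Student-t with alpha dof and scale lam
print(stats.kstest(x, stats.t(df=alpha, scale=lam).cdf))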
This model produces data which are often near zero, but occasionally highly non-zero. These nonzero elements form star-like patterns, where the points of the star are determined by the direction of
the weights (e.g., Fig. 2).
One of the major technical difficulties posed by sparse-coding is that, in the over-complete regime,
the posterior distribution of the latent variables p(X|Y, Θ) is often complex and multi-modal. Approximation schemes are therefore required, but we must be careful to ensure that the scheme we
choose does not bias the conclusions we are trying to draw. This is true for any application of sparse
coding, but is particularly pertinent for our problem as we will be quantitatively comparing different
sparse-coding models.
3 Bayesian Model Comparison
A possible strategy for investigating the sparseness/over-completeness coupling would be to tile
the space with models and learn the parameters at each point (as schematised in Fig. 1). A model
comparison criterion could then be used to rank the models, and to find the optimal sparseness/overcompleteness. One such criterion would be to use cross validation and evaluate the likelihoods on
some held-out test data. Another is to use (approximate) Bayesian Model Comparison, and it is on
this method that we focus.
To evaluate the plausibility of two alternative versions of a model M, each with a different setting
of the hyperparameters θ_1 and θ_2, in the light of some data Y, we compute the evidence [6]:
\frac{p(M, \theta_1 \mid Y)}{p(M, \theta_2 \mid Y)} = \frac{p(Y \mid M, \theta_1)\, P(M, \theta_1)}{p(Y \mid M, \theta_2)\, P(M, \theta_2)} .    (8)
Since we do not have any reason a priori to prefer one particular configuration of hyperparameters
to another, we take the prior terms P(M, θ_i) to be equal, which leaves us with the ratio of the
marginal-likelihoods (or Bayes Factor),
\frac{P(Y \mid M, \theta_1)}{P(Y \mid M, \theta_2)} ,    (9)
The marginal-likelihoods themselves are hard to compute, being formed from high dimensional
integrals over the latent variables V and parameters Θ,
p(Y \mid M, \theta_i) = \int \mathrm{d}V\, \mathrm{d}\Theta\; p(Y, V, \Theta \mid M, \theta_i)    (10)
                      = \int \mathrm{d}V\, \mathrm{d}\Theta\; p(Y, V \mid \Theta, M, \theta_i)\, p(\Theta \mid M, \theta_i) .    (11)
One concern in model comparison might be that the more complex models (those which are more
over-complete) have a larger number of parameters and therefore "fit" any data set better. However, the
Bayes factor (Eq. 9) implicitly implements a probabilistic version of Occam's razor that penalises
more complex models and mitigates this effect [6]. This makes the Bayesian method appealing for
determining the over-completeness of a sparse-coding model.
Unfortunately computing the marginal-likelihood is computationally intensive, and this precludes
tiling the sparseness/over-completeness space. However, an alternative is to learn the optimal overcompleteness at a given sparseness using automatic relevance determination (ARD) [7, 8]. The
advantage of ARD is that it changes a hard and lengthy model comparison problem (i.e., computing
the marginal-likelihood for many models of differing dimensionalities) into a much simpler inference problem. In a nutshell, the idea is to equip the model with many more components than are
believed to be present in the data, and to let it prune out the weights which are unnecessary. Practically this involves placing a (Gaussian) prior over the components which favours small weights,
and then inferring the scale of this prior. In this way the scale of the superfluous weights is driven to
zero, removing them from the model. The necessary ARD hyper-priors are
p(g_k \mid \beta_k) = \mathcal{N}_{g_k}\!\left( 0, \beta_k^{-1} \right) ,    (12)
p(\beta_k) = \mathcal{G}_{\beta_k}(\kappa_k, l_k) .    (13)
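A minimal sketch of the pruning behaviour implied by Eqs. (12)-(13): each basis vector g_k acquires a Gamma posterior over its precision β_k, and columns whose expected precision diverges are removed. The shape-rate parameterization and the names kappa0, l0 and thresh are illustrative assumptions, not taken from the paper.

import numpy as np

def ard_prune(G, kappa0=1e-3, l0=1e-3, thresh=1e6):
    D, K = G.shape
    kappa = kappa0 + 0.5 * D                    # posterior shape, same for all k
    l = l0 + 0.5 * np.sum(G**2, axis=0)         # posterior rate for each column
    beta_mean = kappa / l                       # <beta_k> under q(beta_k)
    keep = beta_mean < thresh                   # huge precision => weight collapses
    return G[:, keep], beta_mean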
4 Determining the over-completeness: Variational Bayes
In the previous two sections we described a generative model for sparse coding that is theoretically
able to learn the optimal over-completeness of natural scenes. We have two distinct uses for this
model: The first, and computationally more demanding task, is to learn the over-completeness at a
variety of different, fixed, sparsenesses (that is, to find the optimal over-completeness in a vertical
slice through Fig. 1); the second is to determine the optimal point on this trade-off by evaluating
the (approximate) marginal-likelihood (that is, evaluating points along the trade-off line in Fig. 1 to
find the optimal model, the star). It turns out that no single method is able to solve both these tasks,
but that it is possible to develop a pair of approximate algorithms to solve them separately. The
first approximation scheme is Variational Bayes (VB), and it excels at the first task, but is severely
biased in the case of the second. The second scheme is Annealed Importance Sampling (AIS) which
is prohibitively slow for the first task, but much more accurate on the second. We describe them in
turn, starting with VB.
The quantity required for learning is the marginal-likelihood,
\log p(Y \mid M, \theta) = \log \int \mathrm{d}V\, \mathrm{d}\Theta\; p(Y, V, \Theta \mid M, \theta) .    (14)
Computing this integral is intractable (for reasons similar to those given in Sec. 2), but a lower bound can be constructed by introducing any distribution over the latent variables and parameters,
q(V, Θ), and using Jensen's inequality,
\log p(Y \mid M, \theta) \geq \int \mathrm{d}V\, \mathrm{d}\Theta\; q(V, \Theta) \log \frac{p(Y, V, \Theta \mid M, \theta)}{q(V, \Theta)} =: \mathcal{F}(q(V, \Theta))    (15)
= \log p(Y \mid M, \theta) - \mathrm{KL}\!\left( q(V, \Theta) \,\|\, p(V, \Theta \mid Y) \right)    (16)
This lower-bound is called the free-energy, and the idea is to repeatedly optimise it with respect
to the distribution q(V, Θ) so that it becomes as close to the true marginal likelihood as possible.
Clearly the optimal choice for q(V, Θ) is the (intractable) true posterior. However, by constraining
this distribution headway can be made. In particular if we assume that the set of parameters and
set of latent variables are independent in the posterior, so that q(V, Θ) = q(V)q(Θ), then we can
sequentially optimise the free-energy with respect to each of these distributions. For large hierarchical models, including the one described in this paper, it is often necessary to introduce further
factorisations within these two distributions in order to derive the updates. Their general form is,
q(V_i) \propto \exp \langle \log p(V, \Theta) \rangle_{q(\Theta) \prod_{j \neq i} q(V_j)}    (17)
q(\Theta_i) \propto \exp \langle \log p(V, \Theta) \rangle_{q(V) \prod_{j \neq i} q(\Theta_j)} .    (18)
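In practice the updates of Eqs. (17)-(18) are applied in a coordinate-ascent loop; a useful correctness check is that exact VBEM updates can never decrease the free energy. The sketch below assumes model-specific callables update_V, update_Theta and free_energy; all names are illustrative.

import numpy as np

def vbem(q_V, q_Theta, update_V, update_Theta, free_energy, n_iter=500):
    F_old = -np.inf
    for _ in range(n_iter):
        q_V = update_V(q_Theta)                 # Eq. (17) for each factor of q(V)
        q_Theta = update_Theta(q_V)             # Eq. (18) for each factor of q(Theta)
        F = free_energy(q_V, q_Theta)           # Eq. (15)
        assert F >= F_old - 1e-8, "free energy decreased: bug in the updates"
        F_old = F
    return q_V, q_Theta, F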
As the Bayesian Sparse Coding model is composed of distributions from the exponential family, the
functional form of these updates is the same as that of the corresponding priors. So, for example, the latent
variables have the following form: q(xt ) is Gaussian and q(ut,k ) is Gamma distributed.
Although this approximation is good at discovering the over-completeness of data at fixed sparsities,
it provides an estimate of the marginal-likelihood (the free-energy) which is biased toward regions of
low sparsity. The reason is simple to understand. The difference between the free energy and the true
likelihood is given by the KL divergence between the approximate and true posterior. Thus, the free-energy bound is tightest in regions where q(V, Θ) is a good match to the true posterior, and loosest in regions where it is a poor match. At high sparsities, the true posterior is multimodal and highly non-Gaussian. In this regime q(V, Θ), which is always uni-modal, is a poor approximation. At low sparsities the prior becomes Gaussian-like and the posterior also becomes a uni-modal Gaussian.
In this regime q(V, Θ) is an excellent approximation. This leads to a consistent bias in the peak of
the free-energy toward regions of low sparsity. One might also be concerned with another potential
source of bias: The number of modes in the posterior increases with the number of components
in the model, which gives a worse match to the variational approximation for more over-complete
models. However, because of the sparseness of the prior distribution, most of the modes are going
to be very shallow for typical inputs, so that this effect should be small. We verify this claim on
artificial data in Section 6.2.
5 Determining the sparsity: AIS
An approximation scheme is required to estimate the marginal-likelihood, but without a sparsity-dependent bias. Any scheme which uses a uni-modal approximation to the posterior will inevitably
fall victim to such biases. This rules out many alternate variational schemes, as well as methods
like the Laplace approximation, or Expectation Propagation. One alternative might be to use a
variational method which has a multi-modal approximating distribution (e.g. a mixture model). The
approach taken here is to use Annealed Importance Sampling (AIS) [9] which is one of the few
methods for evaluating normalising constants of intractable distributions. The basic idea behind
AIS is to estimate the marginal-likelihood using importance sampling. The twist is that the proposal
distribution for the importance sampler is itself generated using an MCMC method. Briefly, this
inner loop starts by drawing samples from the model?s prior distribution and continues to sample
as the prior is deformed into the posterior, according to an annealing schedule. Both the details of
this schedule, and having a quick-mixing MCMC method, are critical for good results. In fact it is
simple to derive a quick-mixing Gibbs sampler for our application and this makes AIS particularly
appealing.
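The core of the method can be sketched as follows, assuming log_f(x, beta) returns the log of the unnormalized intermediate density (prior at beta = 0, posterior at beta = 1) and mcmc_step(x, beta) is a transition kernel, such as one sweep of the Gibbs sampler mentioned above, that leaves that density invariant. Function names and signatures are illustrative.

import numpy as np

def ais_log_evidence_ratio(log_f, betas, mcmc_step, sample_prior, n_samples=250):
    log_w = np.zeros(n_samples)
    for s in range(n_samples):
        x = sample_prior()                       # exact draw at beta = 0
        for b_prev, b in zip(betas[:-1], betas[1:]):
            log_w[s] += log_f(x, b) - log_f(x, b_prev)
            x = mcmc_step(x, b)                  # move under the new temperature
    # log-mean-exp of the weights estimates log(Z_posterior / Z_prior)
    return np.logaddexp.reduce(log_w) - np.log(n_samples)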
6 Results
Before tackling natural images, it is necessary to verify that the approximations can discover the
correct degree of over-completeness and sparsity in the case where the data are drawn from the
forward model. This is done in two stages: Firstly we focus on a very simple, low-dimensional
example that is easy to visualise and which helps explicate the learning algorithms, allowing them
to be tuned; Secondly, we turn to a larger scale example designed to be as similar to the tests on
natural data as possible.
6.1 Verification using simple artificial data
In the first experiment the training data are produced as follows: Two-dimensional observations
are generated by three Student-t sources with degrees of freedom chosen to be 2.5. The generative
weights are fixed to be 60 degrees apart from one another, as shown in Figure 2.
A series of VB simulations were then run, differing only in the sparseness level (as measured by
the degrees of freedom of the Student-t distribution over xt ). Each simulation consisted of 500 VB
iterations performed on a set of 3000 data points randomly generated from the model. We initialised
the simulations with K = 7 components. To improve convergence, we started the simulations with
weights near the origin (drawn from a normal distribution with mean 0 and standard deviation 10^-8)
and a relatively large input noise variance, and annealed the noise variance between the iterations
of VBEM. The annealing schedule was as follows: we started with σ_y^2 = 0.3 for 100 iterations,
reduced this linearly down to σ_y^2 = 0.1 in 100 iterations, and finally to σ_y^2 = 0.01 in a further 50
iterations. During the annealing process, the weights typically grew from the origin and spread in all
directions to cover the input space. After an initial growth period, where the representation usually
became as over-complete as allowed by the model, some of the weights rapidly shrank again and
collapsed to the origin. At the same time, the corresponding precision hyperparameters grew and
effectively pruned the unnecessary components. We performed 7 blocks of simulations at different
sparseness levels. In every block we performed 3 runs of the algorithm and retained the result with
the highest free energy.
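Written out explicitly, the noise-annealing schedule above is (the final constant segment, filling the remainder of the 500 iterations, is an assumption):

import numpy as np

sigma2_y = np.concatenate([
    np.full(100, 0.3),               # hold at 0.3 for 100 iterations
    np.linspace(0.3, 0.1, 100),      # linear decrease to 0.1
    np.linspace(0.1, 0.01, 50),      # linear decrease to 0.01
    np.full(250, 0.01),              # hold for the remaining iterations
])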
Figure 2: Left: Test data drawn from the simple artificial model. Centre: Free energy of the models learned by
VBEM in the artificial data case. Right: Estimated log marginal likelihood. Error bars are 3 times the estimated
standard deviation.
The marginal likelihoods of the selected results were then estimated using AIS. We derived the
importance weights using a fixed data set with 2500 data points, 250 samples, and 300 intermediate
distributions. Following the recommendations in [9], the annealing schedule was chosen to be linear
initially (with 50 inverse temperatures spaced uniformly from 0 to 0.01), followed by a geometric
section (250 inverse temperatures spaced geometrically from 0.01 to 1). This meant that there were
a total of 300 distributions between the prior and posterior.
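The temperature schedule just described can be constructed directly:

import numpy as np

betas = np.concatenate([
    np.linspace(0.0, 0.01, num=50, endpoint=False),   # linear section
    np.geomspace(0.01, 1.0, num=250),                 # geometric section
])
assert len(betas) == 300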
The results indicate that the combination of the two methods is successful at learning both the overcompleteness and sparseness. In particular the VBEM algorithm was able to recover the correct
dimensionality for all sparseness levels, except for the sparsest case α = 2.1, where it preferred a
model with 5 significant components. As expected, however, figure 2 shows that the maximum free
energy is biased toward the more Gaussian models. In contrast to this, the marginal likelihood estimated by AIS (Fig. 2), which is strictly greater than the free-energy as expected, favours sparseness
levels close to the true value.
6.2 Verification using complex artificial data
Although it is necessary that the inference scheme should pass simple tests like that in the previous
section, they are not sufficient to give us confidence that it will perform successfully on natural
data. One pertinent criticism is that the regime in which we tested the algorithms in the previous
section (two dimensional observations, and three hidden latents) is quite different from that required
to model natural data. To that end, in this section we first learn a sparse model for natural images
with fixed over-completeness levels using a Maximum A Posteriori (MAP) algorithm [2] (degrees
of freedom 2.5). These solutions are then used to generate artificial data as in the previous section.
The goal is to validate the model on data which has a content and scale similar to the natural images
case, but with a controlled number of generative components.
The image data comprised patches of size 9 × 9 pixels, taken at random positions from 36 natural
images randomly selected from the van Hateren database (preprocessed as described in [10]). The
patches were whitened and their dimensionality reduced from 81 to 36 by principal component
analysis. The MAP solution was trained for 500 iterations, with every iteration performed on a new
batch of 1440 patches (100 patches per image).
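A sketch of this preprocessing step, with the patch and output dimensions from the text (the small eps regularizer is an added assumption):

import numpy as np

def pca_whiten(X, n_components=36, eps=1e-8):
    Xc = X - X.mean(axis=0)                              # X is (n_patches, 81)
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(evals)[::-1][:n_components]       # top components
    W = evecs[:, order] / np.sqrt(evals[order] + eps)    # whitening matrix
    return Xc @ W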
The model was initialised with a 3-times over-complete number of components (K = 108). As
above, the weights were initialised near the origin, and the input noise was annealed linearly from
σ_d = 0.5 to σ_d = 0.2 in the first 300 iterations, remaining constant thereafter. Every run consisted
of 500 VBEM iterations, with every iteration performed on 3600 patches generated from the MAP
solution. We performed several simulations for over-completeness levels between 0.5 and 4.5, and
retained the solutions with the highest free energy.
The results are summarised in Figure 3: The model is able to recover the underlying dimensionality
for data between 0.5 and 2 times over-complete, and correctly saturates to 3 times over-complete
(the maximum attainable level here) when the data over-completeness exceeds 3. In the regime
between 2.5 and 3 times over-complete data, the model returns solutions with a smaller number of
components, which is possibly due to the bias described at the end of Section 5. However, these
values are still far above the highest over-completeness learned from natural images (see section 6.3), so that we believe that the bias does not invalidate our conclusions.
Figure 3: True versus inferred over-completeness from data drawn from the forward model trained on natural images. If inference was perfect, the true over-completeness would be recovered (black line). This straight line saturates when we hit the number of latent variables with which ARD was initialised (three times over-complete). The results using multiple runs of ARD are close to this line (open circles; simulations with the highest free-energy are shown as closed circles). The maximal and best over-completeness inferred from natural scenes is shown by the dotted line, and lies well below the over-completenesses we are able to infer.
6.3 Natural images
Having established that the model performs as expected, at least when the data is drawn from the
forward model, we now turn to natural image data and examine the optimal over-completeness ratio
and sparseness degree for natural scene statistics.
The image data for this simulation and the model initialisation and annealing procedure are identical
to the ones in the experiments in the preceeding section. We performed 20 simulations with different
sparseness levels, especially concentrated on the more sparse values. Every run comprised 500
VBEM iterations, with every iteration performed on a new batch of 3600 patches.
As shown in Figure 4, the free energy increased almost monotonically until α = 5 and then stabilised
and started to decrease for more Gaussian models. The algorithm learnt models that were only
slightly over-complete: the over-completeness ratio was distributed between 1 and 1.3, with a trend
for being more over-complete at high sparseness levels (Fig. 4). Although this general trend accords
with the intuition that sparseness and over-completeness are coupled, both the magnitude of the
effect and the degree of over-completeness is smaller than might have been anticipated. Indeed, this
result suggests that highly over-complete models with a Student-t prior may very well be overfitting
the data.
Finally we performed AIS using the same annealing schedule as in Section 6.1, using 250 samples
for the first 6 sparseness levels and 50 for the successive 14. The estimates obtained for the log
marginal likelihood, shown in Figure 4, were monotonically increasing with increasing sparseness
(decreasing α). This indicates that sparse models are indeed optimal for natural scenes. Note that
this is exactly the opposite trend to that of the free energy, indicating that it is also biased for natural
scenes. Figure 4 shows the basis vectors learned in the simulation with α = 2.09, which had
maximal marginal likelihood. The weights resemble the Gabor wavelets, typical of sparse codes for
natural images [1].
7 Discussion
Our results suggest that the optimal sparse-coding model for natural scenes is indeed one which
is very sparse, but only modestly over-complete. The anticipated coupling between the degree of
sparsity and the over-completeness in the model is visible, but is weak.
One crucial question is how far these results will generalise to other prior distributions; and indeed,
which of the various possible sparse-coding priors is best able to capture the structure of natural
scenes.
Figure 4: Natural images results. a) Free energy. b) Marginal likelihood. c) Estimated over-completeness. d) Basis vectors.
One indication that the Student-t might not be optimal is its behaviour as the degree-of-
freedom parameter moves towards sparser values. The distribution puts a very small amount of
mass at a very great distance from the mean (for example, the kurtosis is undefined for α < 4). It
is not clear that data with such extreme values will be encountered in typical data sets, and so the
model may become distorted at high sparseness values.
Future work will be directed towards more general prior distributions. The formulation of the
Student-t in terms of a random precision Gaussian is computationally helpful. While no longer
within the exponential family, other distributions on the precision (such as a uniform one) may be
approximated using a similar approach.
Acknowledgements
This work has been supported by the Gatsby Charitable Foundation. We thank Yee Whye Teh, Iain
Murray, and David MacKay for fruitful discussions.
References
[1] B.A. Olshausen and D.J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996.
[2] B.A. Olshausen and D.J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37:3311–3325, 1997.
[3] Y.W. Teh, M. Welling, S. Osindero, and G.E. Hinton. Energy-based models for sparse overcomplete representations. Journal of Machine Learning Research, 4:1235–1260, 2003.
[4] A.J. Bell and T.J. Sejnowski. The "independent components" of natural scenes are edge filters. Vision Research, 37(23):3327–3338, 1997.
[5] S. Osindero, M. Welling, and G.E. Hinton. Topographic product models applied to natural scene statistics. Neural Computation, 18:381–414, 2006.
[6] D.J.C. MacKay. Bayesian interpolation. Neural Computation, 4(3):415–447, 1992.
[7] C.M. Bishop. Variational principal components. In ICANN 1999 Proceedings, pages 509–514, 1999.
[8] M.J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, 2003.
[9] R.M. Neal. Annealed importance sampling. Statistics and Computing, 11:125–139, 2001.
[10] J.H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. R. Soc. Lond. B, 265:359–366, 1998.
Sparse deep belief net model for visual area V2
Honglak Lee
Chaitanya Ekanadham
Andrew Y. Ng
Computer Science Department
Stanford University
Stanford, CA 94305
{hllee,chaitu,ang}@cs.stanford.edu
Abstract
Motivated in part by the hierarchical organization of the cortex, a number of algorithms have recently been proposed that try to learn hierarchical, or "deep,"
structure from unlabeled data. While several authors have formally or informally
compared their algorithms to computations performed in visual area V1 (and the
cochlea), little attempt has been made thus far to evaluate these algorithms in terms
of their fidelity for mimicking computations at deeper levels in the cortical hierarchy. This paper presents an unsupervised learning model that faithfully mimics
certain properties of visual area V2. Specifically, we develop a sparse variant of
the deep belief networks of Hinton et al. (2006). We learn two layers of nodes in
the network, and demonstrate that the first layer, similar to prior work on sparse
coding and ICA, results in localized, oriented, edge filters, similar to the Gabor
functions known to model V1 cell receptive fields. Further, the second layer in our
model encodes correlations of the first layer responses in the data. Specifically, it
picks up both colinear (?contour?) features as well as corners and junctions. More
interestingly, in a quantitative comparison, the encoding of these more complex
?corner? features matches well with the results from the Ito & Komatsu?s study
of biological V2 responses. This suggests that our sparse variant of deep belief
networks holds promise for modeling more higher-order features.
1 Introduction
The last few years have seen significant interest in "deep" learning algorithms that learn layered,
hierarchical representations of high-dimensional data [1, 2, 3, 4]. Much of this work appears to
have been motivated by the hierarchical organization of the cortex, and indeed authors frequently
compare their algorithms' output to the oriented simple cell receptive fields found in visual area V1
(e.g., [5, 6, 2]). Indeed, some of these models are often viewed as first attempts to elucidate what
learning algorithm (if any) the cortex may be using to model natural image statistics.
However, to our knowledge no serious attempt has been made to directly relate, such as through
quantitative comparisons, the computations of these deep learning algorithms to areas deeper in the
cortical hierarchy, such as to visual areas V2, V4, etc. In this paper, we develop a sparse variant
of Hinton?s deep belief network algorithm, and measure the degree to which it faithfully mimics
biological measurements of V2. Specifically, we take Ito & Komatsu's [7] characterization of V2 in
terms of its responses to a large class of angled bar stimuli, and quantitatively measure the degree to
which the deep belief network algorithm generates similar responses.
Deep architectures attempt to learn hierarchical structure, and hold the promise of being able to
first learn simple concepts, and then successfully build up more complex concepts by composing
together the simpler ones. For example, Hinton et al. [1] proposed an algorithm based on learning
individual layers of a hierarchical probabilistic graphical model from the bottom up. Bengio et al. [3]
proposed a similarly greedy algorithm, one based on autoencoders. Ranzato et al. [2] developed an
energy-based hierarchical algorithm, based on a sequence of sparsified autoencoders/decoders.
In related work, several studies have compared models such as these, as well as nonhierarchical/non-deep learning algorithms, to the response properties of neurons in area V1. A study
by van Hateren and van der Schaaf [8] showed that the filters learned by independent components
analysis (ICA) [9] on natural image data match very well with the classical receptive fields of V1
simple cells. (Filters learned by sparse coding [10, 11] also similarly give responses similar to V1
simple cells.) Our work takes inspiration from the work of van Hateren and van der Schaaf, and
represents a study that is done in a similar spirit, only extending the comparisons to a deeper area in
the cortical hierarchy, namely visual area V2.
2 Biological comparison
2.1 Features in early visual cortex: area V1
The selectivity of neurons for oriented bar stimuli in cortical area V1 has been well documented [12,
13]. The receptive fields of simple cells in V1 are localized, oriented, bandpass filters that resemble
Gabor filters. Several authors have proposed models that have been either formally or informally
shown to replicate the Gabor-like properties of V1 simple cells. Many of these algorithms, such
as [10, 9, 8, 6], compute an (approximately or exactly) sparse representation of the natural stimuli
data. These results are consistent with the ?efficient coding hypothesis? which posits that the goal
of early visual processing is to encode visual information as efficiently as possible [14]. Some
hierarchical extensions of these models [15, 6, 16] are able to learn features that are more complex
than simple oriented bars. For example, hierarchical sparse models of natural images have accounted
for complex cell receptive fields [17], topography [18, 6], colinearity and contour coding [19]. Other
models, such as [20], have also been shown to give V1 complex cell-like properties.
2.2 Features in visual cortex area V2
It remains unknown to what extent the previously described algorithms can learn higher order features that are known to be encoded further down the ventral visual pathway. In addition, the response
properties of neurons in cortical areas receiving projections from area V1 (e.g., area V2) are not
nearly as well documented. It is uncertain what types of stimuli cause V2 neurons to respond optimally [21]. One V2 study by [22] reported that the receptive fields in this area were similar to those
in the neighboring areas V1 and V4. The authors interpreted their findings as suggestive that area
V2 may serve as a place where different channels of visual information are integrated. However,
quantitative accounts of responses in area V2 are few in number. In the literature, we identified two
sets of quantitative data that give us a good starting point for making measurements to determine
whether our algorithms may be computing similar functions as area V2.
In one of these studies, Ito and Komatsu [7] investigated how V2 neurons responded to angular stimuli. They summarized each neuron's response with a two-dimensional visualization of the stimuli
set called an angle profile. By making several axial measurements within the profile, the authors
were able to compute various statistics about each neuron's selectivity for angle width, angle orientation, and for each separate line component of the angle (see Figure 1). Approximately 80% of
the neurons responded to specific angle stimuli. They found neurons that were selective for only
one line component of their peak angle as well as neurons selective for both line components. These
neurons yielded angle profiles resembling those of Cell 2 and Cell 5 in Figure 1, respectively. In
addition, several neurons exhibited a high amount of selectivity for their peak angle, producing angle
profiles like that of Cell 1 in Figure 1. No neurons were found that had more elongation in a diagonal axis than in the horizontal or vertical axes, indicating that neurons in V2 were not selective
for angle width or orientation. Therefore, an important conclusion made from [7] was that a V2
neuron's response to an angle stimulus is highly dependent on its responses to each individual line
component of the angle. While the dependence was often observed to be simply additive, as was
the case with neurons yielding profiles like those of Cells 1 and 2 in Figure 1(right), this was not
always the case. 29 neurons had very small peak response areas and yielded profiles like that of Cell
1 in Figure 1(right), thus indicating a highly specific tuning to an angle stimulus. While the former
responses suggest a simple linear computation of V1 neural responses, the latter responses suggest
a nonlinear computation [21]. The analysis methods adopted in [7] are very useful in characterizing
the response properties, and we use these methods to evaluate our own model.
Another study by Hegde and Van Essen [23] studied the responses of a population of V2 neurons
to complex contour and grating stimuli. They found several V2 neurons responding maximally to
angles, and the distribution of peak angles for these neurons is consistent with that found by [7]. In
addition, several V2 neurons responded maximally to shapes such as intersections, tri-stars, five-point stars, circles, and arcs of varying length.
Figure 1: (Images from [7]; courtesy of Ito and Komatsu) Left: Visualization of angle profiles. The upper-right
and lower-left triangles contain the same stimuli. (A,B) Darkened squares correspond to stimuli that elicited a
large response. The peak responses are circled. (C) The arrangement of the figure is so that one line component
remains constant as one moves along any vertical or horizontal axis. (D) The angle width remains constant
as one moves along the diagonal indicated. (E) The angle orientation remains constant as one moves along
the diagonal indicated. After identifying the optimal stimuli for a neuron in the profile, the number of stimuli
along these various axes (as in C,D,E) eliciting responses larger than 80% of the peak response measure the
neuron's tolerance to perturbations to the line components, peak angle width, and orientation, respectively.
Right: Examples of 4 typical angle profiles. As before, stimuli eliciting large responses are highlighted. Cell 1
has a selective response to a stimulus, so there is no elongation along any axis. Cell 2 has one axis of elongation,
indicating selectivity for one orientation. Cell 5 has two axes of elongation, and responds strongly so long as
either of two edge orientations is present. Cell 4 has no clear axis of elongation.
3 Algorithm
Hinton et al. [1] proposed an algorithm for learning deep belief networks, by treating each layer as a
restricted Boltzmann machine (RBM) and greedily training the network one layer at a time from the
bottom up [24, 1]. In general, however, RBMs tend to learn distributed, non-sparse representations.
Based on results from other methods (e.g., sparse coding [10, 11], ICA [9], heavy-tailed models [6],
and energy based models [2]), sparseness seems to play a key role in learning gabor-like filters.
Therefore, we modify Hinton et al.'s learning algorithm to enable deep belief nets to learn sparse
representations.
3.1 Sparse restricted Boltzmann machines
We begin by describing the restricted Boltzmann machine (RBM), and present a modified version of
it. An RBM has a set of hidden units h, a set of visible units v, and symmetric connection weights
between these two layers represented by a weight matrix W . Suppose that we want to model k
dimensional real-valued data using an undirected graphical model with n binary hidden units. The
negative log probability of any state in the RBM is given by the following energy function:1

    −log P(v, h) = E(v, h) = (1/(2σ²)) Σ_i v_i² − (1/σ²) ( Σ_i c_i v_i + Σ_j b_j h_j + Σ_{i,j} v_i w_{ij} h_j ).    (1)
Here, σ is a parameter, h_j are hidden unit variables, and v_i are visible unit variables. Informally, the
maximum likelihood parameter estimation problem corresponds to learning w_{ij}, c_i and b_j so as to
minimize the energy of states drawn from the data distribution, and raise the energy of states that
are improbable given the data.
Under this model, we can easily compute the conditional probability distributions. Holding either h
or v fixed, we can sample from the other as follows:

    P(v_i | h) = N( c_i + Σ_j w_{ij} h_j , σ² ),    (2)
    P(h_j = 1 | v) = logistic( (1/σ²) (b_j + Σ_i w_{ij} v_i) ).    (3)
1
Due to space constraints, we present an energy function only for the case of real-valued visible units. It is
also straightforward to formulate a sparse RBM with binary-valued visible units; for example, we can write the
energy function as E(v, h) = −(1/σ²) ( Σ_i c_i v_i + Σ_j b_j h_j + Σ_{i,j} v_i w_{ij} h_j ) (see also [24]).
Here, N(·) is the Gaussian density, and logistic(·) is the logistic function.
For training the parameters of the model, the objective is to maximize the log-likelihood of the data.
We also want hidden unit activations to be sparse; thus, we add a regularization term that penalizes
a deviation of the expected activation of the hidden units from a (low) fixed level p.2 Thus, given a
training set {v^(1), . . . , v^(m)} comprising m examples, we pose the following optimization problem:

    minimize_{w_{ij}, c_i, b_j}  −Σ_{l=1}^{m} log Σ_h P(v^(l), h^(l)) + λ Σ_{j=1}^{n} | p − (1/m) Σ_{l=1}^{m} E[h_j^(l) | v^(l)] |²,    (4)

where E[·] is the conditional expectation given the data, λ is a regularization constant, and p is
a constant controlling the sparseness of the hidden units hj . Thus, our objective is the sum of a
log-likelihood term and a regularization term. In principle, we can apply gradient descent to this
problem; however, computing the gradient of the log-likelihood term is expensive. Fortunately, the
contrastive divergence learning algorithm gives an efficient approximation to the gradient of the log-likelihood [25]. Building upon this, on each iteration we can apply the contrastive divergence update
rule, followed by one step of gradient descent using the gradient of the regularization term.3 The
details of our procedure are summarized in Algorithm 1.
Algorithm 1 Sparse RBM learning algorithm
1. Update the parameters using the contrastive divergence learning rule. More specifically,
       w_{ij} := w_{ij} + η( ⟨v_i h_j⟩_data − ⟨v_i h_j⟩_recon )
       c_i := c_i + η( ⟨v_i⟩_data − ⟨v_i⟩_recon )
       b_j := b_j + η( ⟨h_j⟩_data − ⟨h_j⟩_recon ),
   where η is a learning rate, and ⟨·⟩_recon is an expectation over the reconstruction data, estimated
   using one iteration of Gibbs sampling (as in Equations 2 and 3).
2. Update the parameters using the gradient of the regularization term.
3. Repeat Steps 1 and 2 until convergence.
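A minimal training loop for Algorithm 1, reusing the samplers sketched after Equations (2) and (3). The learning rate, initialization scale, and the simplified sparsity gradient (which drops the logistic-derivative factor) are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def train_sparse_rbm(data, n_hidden, sigma, p, lam, eta=1e-3, epochs=50, batch=200):
    """Sparse RBM: CD-1 update (Step 1) then a sparsity step on b (Step 2)."""
    m, n_vis = data.shape
    W = 0.01 * np.random.randn(n_vis, n_hidden)
    b, c = np.zeros(n_hidden), np.zeros(n_vis)
    for _ in range(epochs):
        perm = np.random.permutation(m)
        for i in range(0, m, batch):
            v0 = data[perm[i:i + batch]]
            h0, ph0 = sample_h_given_v(v0, W, b, sigma)
            v1 = sample_v_given_h(h0, W, c, sigma)        # one Gibbs sweep
            _, ph1 = sample_h_given_v(v1, W, b, sigma)
            k = len(v0)
            # Step 1: contrastive divergence rule
            W += eta * (v0.T @ ph0 - v1.T @ ph1) / k
            c += eta * (v0.mean(0) - v1.mean(0))
            b += eta * (ph0.mean(0) - ph1.mean(0))
            # Step 2: descend the penalty lam * |p - mean activation|^2,
            # updating only b (cf. footnote 3); logistic derivative dropped.
            b += eta * lam * (p - ph0.mean(0))
    return W, b, c
```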
3.2 Learning deep networks using sparse RBM
Once a layer of the network is trained, the parameters w_{ij}, b_j, c_i are frozen and the hidden unit
values given the data are inferred. These inferred values serve as the ?data? used to train the next
higher layer in the network. Hinton et al. [1] showed that by repeatedly applying such a procedure,
one can learn a multilayered deep belief network. In some cases, this iterative ?greedy? algorithm
can further be shown to be optimizing a variational bound on the data likelihood, if each layer has
at least as many units as the layer below (although in practice this is not necessary to arrive at a
desirable solution; see [1] for a detailed discussion). In our experiments using natural images, we
learn a network with two hidden layers, with each layer learned using the sparse RBM algorithm
described in Section 3.1.
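The greedy procedure can be written as a short loop; a sketch assuming the train_sparse_rbm and sample_h_given_v helpers above, with per-layer hyperparameters supplied by the caller:

```python
def train_deep_belief_net(data, layer_sizes, sigmas, ps, lams):
    """Greedily stack sparse RBMs: each layer is trained on the hidden-unit
    probabilities inferred from the layer below (Section 3.2)."""
    layers, x = [], data
    for n_hid, sigma, p, lam in zip(layer_sizes, sigmas, ps, lams):
        W, b, c = train_sparse_rbm(x, n_hid, sigma, p, lam)
        layers.append((W, b, c, sigma))
        _, x = sample_h_given_v(x, W, b, sigma)  # freeze layer, infer "data" for next
    return layers
```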
4 Visualization
4.1 Learning "strokes" from handwritten digits
We applied the sparse RBM algorithm to the MNIST
handwritten digit dataset.4 We learned a sparse RBM
with 69 visible units and 200 hidden units. The learned
bases are shown in Figure 2. (Each basis corresponds to
one column of the weight matrix W left-multiplied by
the unwhitening matrix.) Many bases found by the algorithm roughly represent different "strokes" of which
handwritten digits are comprised. This is consistent
Figure 2: Bases learned from MNIST data.
2
Less formally, this regularization ensures that the "firing rate" of the model neurons (corresponding to the
latent random variables hj ) are kept at a certain (fairly low) level, so that the activations of the model neurons
are sparse. Similar intuition was also used in other models (e.g., see Olshausen and Field [10]).
3
To increase computational efficiency, we made one additional change. Note that the regularization term is
defined using a sum over the entire training set; if we use stochastic gradient descent or mini-batches (small
subsets of the training data) to estimate this term, it results in biased estimates of the gradient. To ameliorate
this, we used mini-batches, but in the gradient step that tries to minimize the regularization term, we update
only the bias terms b_j (which directly control the degree to which the hidden units are activated, and thus their
sparsity), instead of updating all the parameters b_j and w_{ij}.
4
Downloaded from http://yann.lecun.com/exdb/mnist/. Each pixel was normalized to the
unit interval, and we used PCA whitening to reduce the dimension to 69 principal components for computational
efficiency. (Similar results were obtained without whitening.)
Figure 3: 400 first layer bases learned from the van Hateren natural image dataset, using our algorithm.
Figure 4: Visualization of 200 second layer bases (model V2 receptive fields), learned from natural images.
Each small group of 3-5 (arranged in a row) images shows one model V2 unit; the leftmost patch in the group
is a visualization of the model V2 basis, and is obtained by taking a weighted linear combination of the first
layer ?V1? bases to which it is connected. The next few patches in the group show the first layer bases that
have the strongest weight connection to the model V2 basis.
with results obtained by applying different algorithms to learn sparse representations of this data
set (e.g., [2, 5]).
4.2 Learning from natural images
We also applied the algorithm to a training set of 14-by-14 natural image patches, taken from
a dataset compiled by van Hateren.5 We learned a sparse RBM model with 196 visible units and
400 hidden units. The learned bases are shown in Figure 3; they are oriented, gabor-like bases and
resemble the receptive fields of V1 simple cells.6
4.3 Learning a two-layer model of natural images using sparse RBMs
We further learned a two-layer network by stacking one sparse RBM on top of another (see Section 3.2 for details).7 After learning, the second layer weights were quite sparse: most of the
weights were very small, and only a few were either highly positive or highly negative. Positive
5
The images were obtained from http://hlab.phys.rug.nl/imlib/index.html. We used
100,000 14-by-14 image patches randomly sampled from an ensemble of 2000 images; each subset of 200
patches was used as a mini-batch.
6
Most other authors' experiments to date using regular (non-sparse) RBMs, when trained on such data,
seem to have learned relatively diffuse, unlocalized bases (ones that do not represent oriented edge filters).
While sensitive to the parameter settings and requiring a long training time, we found that it is possible in
some cases to get a regular RBM to learn oriented edge filter bases as well. But in our experiments, even in
these cases we found that repeating this process to build a two layer deep belief net (see Section 4.3) did not
encode a significant number of corners/angles, unlike one trained using the sparse RBM; therefore, it showed
significantly worse match to the Ito & Komatsu statistics. For example, the fraction of model V2 neurons that
respond strongly to a pair of edges near right angles (formally, have peak angle in the range 60-120 degrees)
was 2% for the regular RBM, whereas it was 17% for the sparse RBM (and Ito & Komatsu reported 22%). See
Section 5.1 for more details.
7
For the results reported in this paper, we trained the second layer sparse RBM with real-valued visible
units; however, the results were very similar when we trained the second layer sparse RBM with binary-valued
visible units (except that the second layer weights became less sparse).
Figure 5: Top: Visualization of four learned model V2 neurons. (Visualization in each row of four or five
patches follows format in Figure 4.) Bottom: Angle stimulus response profile for model V2 neurons in the top
row. The 36×36 grid of stimuli follows [7], in which the orientations of two lines are varied to form different
angles. As in Figure 1, darkened patches represent stimuli to which the model V2 neuron responds strongly;
also, a small black square indicates the overall peak response.
weights represent excitatory connections between model V1 and model V2 units, whereas negative
elements represent inhibitory connections. By visualizing the second layer bases as shown in Figure 4, we observed bases that encoded co-linear first layer bases as well as edge junctions. This
shows that by extending the sparse RBM to two layers and using greedy learning, the model is able
to learn bases that encode contours, angles, and junctions of edges.
5
Evaluation experiments
We now more quantitatively compare the algorithm?s learned responses to biological measurements.8
5.1 Method: Ito-Komatsu paper protocol
We now describe the procedure we used to compare our model with the experimental data in [7]. We
generated a stimulus set consisting of the same set of angles (pairs of edges) as [7]. To identify the
"center" of each model neuron's receptive field, we translate all stimuli densely over the 14×14 input
image patch, and identify the position at which the maximum response is elicited. All measures are
then taken with all angle stimuli centered at this position.9
Using these stimuli, we compute the hidden unit probabilities from our model V1 and V2 neurons.
In other words, for each stimulus we compute the first hidden layer activation probabilities, then
feed this probability as data to the second hidden layer and compute the activation probabilities
again in the same manner. Following a protocol similar to [7], we also eliminate from consideration
the model neurons that do not respond strongly to corners and edges.10 Some representative results
are shown in Figure 5. (The four angle profiles shown are fairly typical of those obtained in our
experiments.) We see that all the V2 bases in Figure 5 have maximal response when their strongest
V1-basis components are aligned with the stimulus. Thus, some of these bases do indeed seem to
encode edge junctions or crossings.
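The response computation described above amounts to a feedforward pass through the trained network; a minimal sketch reusing the helpers from Section 3 (stimulus generation, centering, and the exclusion step of footnote 10 are omitted):

```python
def model_responses(stimuli, layers):
    """Per-layer activation probabilities for a batch of stimuli:
    out[0] is the model V1 response, out[1] the model V2 response."""
    out, x = [], stimuli
    for (W, b, c, sigma) in layers:
        _, x = sample_h_given_v(x, W, b, sigma)
        out.append(x)
    return out
```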
We also compute summary statistics similar to those in [7] (described in Figure 1(C,D,E)), which more quantitatively measure the distribution of V2 or model V2 responses to the different angle stimuli. Figure 6
plots the responses of our model, together with V2 data taken from [7]. Along many dimensions,
the results from our model match those from the macaque V2 fairly well.
8
The results we report below were very insensitive to the choices of σ and λ. We set σ to 0.4 and 0.05
for the first and second layers (chosen to be on the same scale as the standard deviation of the data and the
first-layer activations), and λ = 1/p in each layer. We used p = 0.02 and 0.05 for the first and second layers.
9
Other details: The stimulus set is created by generating a binary-mask image, which is then scaled to normalize contrast. To determine this scaling constant, we used single bar images by translating and rotating to
all possible positions, and fixed the constant such that the top 0.5% (over all translations and rotations) of the
stimuli activate the model V1 cells above 0.5. This normalization step corrects for the RBM having been trained
on a data distribution (natural images) that had very different contrast ranges than our test stimulus set.
10
In detail, we generated a set of random low-frequency stimuli, by generating small random K×K
(K=2,3,4) images with each pixel drawn from a standard normal distribution, and rescaling the image using
bicubic interpolation to 14×14 patches. These stimuli are scaled such that about 5% of the V2 bases fire
maximally to them. We then exclude the V2 bases that are maximally activated by these random stimuli from the subsequent analysis.
[Figure 6: five histogram panels (peak angles; primary line axis; secondary line axis; angle width axis; angle orientation axis), each comparing the sparse DBN with the Ito & Komatsu data.]
Figure 6: Images show distributions over stimulus response statistics (averaged over 10 trials) from our algorithm (blue) and in data taken from [7] (green). The five figures show respectively (i) the distribution over peak
angle response (ranging from 0 to 180 degrees; each bin represents a range of 30 degrees), (ii) distribution over
tolerance to primary line component (Figure 1C, in dominant vertical or horizontal direction), (iii) distribution
over tolerance to secondary line component (Figure 1C, in non-dominant direction), (iv) tolerance to angle
width (Figure 1D), (v) tolerance to angle orientation (Figure 1E). See Figure 1 caption, and [7], for details.
Figure 7: Visualization of a number of model V2 neurons that maximally respond to various complex stimuli.
Each row of seven images represents one V2 basis. In each row, the leftmost image shows a linear combination
of the top three weighted V1 components that comprise the V2 basis; the next three images show the top three
optimal stimuli; and the last three images show the top three weighted V1 bases. The V2 bases shown in the
figures maximally respond to acute angles (left), obtuse angles (middle), and tri-stars and junctions (right).
5.2 Complex shaped model V2 neurons
Our second experiment represents a comparison to a subset of the results described in Hegde and van
Essen [23]. We generated a stimulus set comprising some of [23]'s complex shaped stimuli: angles,
single bars, tri-stars (three line segments that meet at a point), and arcs/circles, and measured the
response of the second layer of our sparse RBM model to these stimuli.11 We observe that many V2
bases are activated mainly by one of these different stimulus classes. For example, some model V2
neurons activate maximally to single bars; some maximally activate to (acute or obtuse) angles; and
others to tri-stars (see Figure 7). Further, the number of V2 bases that are maximally activated by
acute angles is significantly larger than the number of obtuse angles, and the number of V2 bases
that respond maximally to the tri-stars was much smaller than both preceding cases. This is also
consistent with the results described in [23].
6 Conclusions
We presented a sparse variant of the deep belief network model. When trained on natural images,
this model learns local, oriented, edge filters in the first layer. More interestingly, the second layer
captures a variety of both colinear (?contour?) features as well as corners and junctions, that in a
quantitative comparison to measurements of V2 taken by Ito & Komatsu, appeared to give responses
that were similar along several dimensions. This by no means indicates that the cortex is a sparse
RBM, but perhaps is more suggestive of contours, corners and junctions being fundamental to the
statistics of natural images.12 Nonetheless, we believe that these results also suggest that sparse
11
All the stimuli were 14-by-14 pixel image patches. We applied the protocol described in Section 5.1 to the
stimulus data, to compute the model V1 and V2 responses.
12
In preliminary experiments, we also found that when these ideas are applied to self-taught learning [26] (in
which one may use unlabeled data to identify features that are then useful for some supervised learning task),
using a two-layer sparse RBM usually results in significantly better features for object recognition than using
only a one-layer network.
7
deep learning algorithms, such as our sparse variant of deep belief nets, hold promise for modeling
higher-order features such as might be computed in the ventral visual pathway in the cortex.
Acknowledgments
We give warm thanks to Minami Ito, Geoffrey Hinton, Chris Williams, Rajat Raina, Narut Sereewattanawoot, and Austin Shoemaker for helpful discussions. Support from the Office of Naval Research
under MURI N000140710747 is gratefully acknowledged.
References
[1] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[2] M. Ranzato, C. Poultney, S. Chopra, and Y. LeCun. Efficient learning of sparse representations with an energy-based model. In NIPS, 2006.
[3] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In NIPS, 2006.
[4] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In ICML, 2007.
[5] G. E. Hinton, S. Osindero, and K. Bao. Learning causally linked MRFs. In AISTATS, 2005.
[6] S. Osindero, M. Welling, and G. E. Hinton. Topographic product models applied to natural scene statistics. Neural Computation, 18:381–344, 2006.
[7] M. Ito and H. Komatsu. Representation of angles embedded within contour stimuli in area V2 of macaque monkeys. The Journal of Neuroscience, 24(13):3313–3324, 2004.
[8] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. R. Soc. Lond. B, 265:359–366, 1998.
[9] A. J. Bell and T. J. Sejnowski. The "independent components" of natural scenes are edge filters. Vision Research, 37(23):3327–3338, 1997.
[10] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[11] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. In NIPS, 2007.
[12] D. Hubel and T. Wiesel. Receptive fields and functional architecture of monkey striate cortex. Journal of Physiology, 195:215–243, 1968.
[13] R. L. DeValois, E. W. Yund, and N. Hepler. The orientation and direction selectivity of cells in macaque visual cortex. Vision Res., 22:531–544, 1982.
[14] H. B. Barlow. The coding of sensory messages. Current Problems in Animal Behavior, 1961.
[15] P. O. Hoyer and A. Hyvärinen. A multi-layer sparse coding network learns contour coding from natural images. Vision Research, 42(12):1593–1605, 2002.
[16] Y. Karklin and M. S. Lewicki. A hierarchical Bayesian model for learning non-linear statistical regularities in non-stationary natural signals. Neural Computation, 17(2):397–423, 2005.
[17] A. Hyvärinen and P. O. Hoyer. Emergence of phase and shift invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705–1720, 2000.
[18] A. Hyvärinen, P. O. Hoyer, and M. O. Inki. Topographic independent component analysis. Neural Computation, 13(7):1527–1558, 2001.
[19] A. Hyvärinen, M. Gutmann, and P. O. Hoyer. Statistical model of natural stimuli predicts edge-like pooling of spatial frequency channels in V2. BMC Neuroscience, 6:12, 2005.
[20] L. Wiskott and T. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4):715–770, 2002.
[21] G. Boynton and J. Hegde. Visual cortex: The continuing puzzle of area V2. Current Biology, 14(13):R523–R524, 2004.
[22] J. B. Levitt, D. C. Kiper, and J. A. Movshon. Receptive fields and functional architecture of macaque V2. Journal of Neurophysiology, 71(6):2517–2542, 1994.
[23] J. Hegde and D. C. Van Essen. Selectivity for complex shapes in primate visual area V2. Journal of Neuroscience, 20:RC61–66, 2000.
[24] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[25] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771–1800, 2002.
[26] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: Transfer learning from unlabeled data. In ICML, 2007.
Classification via Minimum Incremental Coding
Length (MICL)
John Wright*, Yi Ma
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
{jnwright,yima}@uiuc.edu
Yangyu Tao, Zhouchen Lin, Heung-Yeung Shum
Visual Computing Group
Microsoft Research Asia
{v-yatao,zhoulin,hshum}@microsoft.com
Abstract
We present a simple new criterion for classification, based on principles from lossy
data compression. The criterion assigns a test sample to the class that uses the minimum number of additional bits to code the test sample, subject to an allowable
distortion. We prove asymptotic optimality of this criterion for Gaussian data and
analyze its relationships to classical classifiers. Theoretical results provide new
insights into relationships among popular classifiers such as MAP and RDA, as
well as unsupervised clustering methods based on lossy compression [13]. Minimizing the lossy coding length induces a regularization effect which stabilizes the
(implicit) density estimate in a small-sample setting. Compression also provides
a uniform means of handling classes of varying dimension. This simple classification criterion and its kernel and local versions perform competitively against
existing classifiers on both synthetic examples and real imagery data such as handwritten digits and human faces, without requiring domain-specific information.
1 Introduction
One quintessential problem in statistical learning [9, 20] is to construct a classifier from labeled
training data (x_i, y_i) ∼iid p_{X,Y}(x, y). Here, x_i ∈ R^n is the observation, and y_i ∈ {1, . . . , K} its
associated class label. The goal is to construct a classifier g : R^n → {1, . . . , K} which minimizes
the expected risk (or probability of error): g* = arg min_g E[I_{g(X) ≠ Y}], where the expectation is taken
with respect to p_{X,Y}. When the conditional class distributions p_{X|Y}(x|y) and the class priors p_Y(y)
are given, the maximum a posteriori (MAP) assignment

    ŷ(x) = arg min_{y∈{1,...,K}} −ln p_{X|Y}(x|y) − ln p_Y(y)    (1)
gives the optimal classifier. This amounts to a minimum coding length principle: the optimal classifier minimizes the Shannon optimal (lossless) coding length of the test data x with respect to
the distribution of the true class. The first term is the number of bits needed to code x w.r.t. the
distribution of class y, and the second term is the number of bits needed to code the label y for x.
Issues with Learning the Distributions from Training Samples. In the typical classification
setting, the distributions pX|Y (x|y) and pY (y) need to be learned from a set of labeled training
*
The authors gratefully acknowledge support from grants NSF Career IIS-0347456, NSF CRS-EHS0509151, NSF CCF-TF-0514955, and ONR YIP N00014-05-1-0633.
data. Conventional approaches to model estimation (implicitly) assume that the distributions are
nondegenerate and the samples are sufficiently dense. However, these assumptions fail in many
classification problems which are vital for applications in computer vision [10,11]. For instance, the
set of images of a human face taken from different angles and under different lighting conditions
often lies in a low-dimensional subspace or submanifold of the ambient space [2]. As a result, the associated distributions are degenerate or nearly degenerate. Moreover, due to the high dimensionality
of imagery data, the set of training images is typically sparse.
Inferring the generating probability distribution pX,Y from a sparse set of samples is an inherently
ill-conditioned problem [20]. Furthermore, in the case of degenerate distributions, the classical
likelihood function (1) does not have a well-defined maximum [20]. Thus, to infer the distribution
from the training data or to use it to classify new observations, the distribution or its likelihood
function needs to be properly ?regularized.? Typically, this is accomplished either explicitly via
smoothness constraints, or implicitly via parametric assumptions on the distribution [3]. However,
even if the distributions are assumed to be generic Gaussians, explicit regularization is still necessary
to achieve good small-sample performance [6].
In many real problems in computer vision, the distributions associated with different classes of data
have different model complexity. For instance, when detecting a face in an image, features associated
with the face often have a low-dimensional structure which is "embedded" as a submanifold in a
cloud of essentially random features from the background. Model selection criteria such as minimum
description length (MDL) [12, 16] serve as important modifications to MAP for model estimation
across classes of different complexity. It selects the model that minimizes the overall coding length
of the given (training) data, hence the name "minimum description length" [1]. Notice, however, that
MDL does not specify how the model complexity should be properly accounted for when classifying
new test data among models that have different dimensions.
Solution from Lossy Data Coding. Given the difficulty of learning the (potentially degenerate)
distributions pX|Y (x|y) from a few samples in a high-dimensional space, it makes more sense to
seek good ?surrogates? for implementing the minimum coding length principle (1). Our idea is to
measure how efficiently a new observation can be encoded by each class of the training data subject
to an allowable distortion, and to assign the new observation to the class that requires the minimum
number of additional bits. We dub this criterion "minimum incremental coding length" (MICL) for
classification. It provides a counterpart of the MDL principle for model estimation and as a surrogate
for the minimum coding length principle for classification.
The proposed MICL criterion naturally addresses the issues of regularization and model complexity.
Regularization is introduced through the use of lossy coding, i.e. coding the test data x up to an
allowable distortion1 (placing our approach along the lines of lossy MDL [15]). This contrasts with
Shannon?s optimal lossless coding length, which requires precise knowledge of the true distributions.
Lossy coding length also accounts for model complexity by directly measuring the difference in the
volume (hence dimension) of the training data with and without the new observation.
Relationships to Existing Classifiers. While MICL and MDL both minimize a coding-theoretic
objective, MICL differs strongly from traditional MDL approaches to classification such as those
proven inconsistent in [8]. Those methods choose a decision boundary that minimizes the total number of bits needed to code the boundary and the samples it incorrectly classifies. In contrast, MICL
uses coding length directly as a measure of how well the training data represent the new sample.
The inconsistency result of [8] does not apply in this modified context. Within the lossy data coding framework, we establish that the MICL criterion leads to a family of classifiers that generalize
the conventional MAP classifier (1). We prove that for Gaussian distributions, the MICL criterion
asymptotically converges to a regularized version of MAP2 (see Theorem 1) and give a precise estimate of the convergence rate (see Theorem 2). Thus, lossy coding induces a regularization effect
similar to Regularized Discriminant Analysis (RDA) [6], with similar gains in finite sample performance with respect to MAP/QDA. The fully Bayesian approach to model estimation, in which
posterior distributions over model parameters are estimated also provides finite sample gains over
1
Information Bottleneck also uses lossy coding, but in an unsupervised manner, for clustering, feature
selection and dimensionality reduction [19]. We apply lossy coding in the supervised (classification) setting.
2
MAP subject to a Gaussian assumption is often referred to as Quadratic Discriminant Analysis (QDA) [9].
ML/MAP [14]. However, that method is sensitive to the choice of prior when the number of samples
is less than the dimension of the space, a situation that poses no difficulty to our proposed classifier.
When the distributions involved are not Gaussian, the MICL criterion can still be applied locally,
similar to the popular k-Nearest Neighbor (k-NN) classifier. However, the local MICL classifier significantly improves on the k-NN classifier, as it accounts for both the number of samples and
the distribution of the samples within the neighborhood. MICL can also be kernelized to handle
nonlinear/non-Gaussian data, an extension similar to the generalization of Support Vector Machines
(SVM) to nonlinear decision boundaries. The kernelized version of MICL provides a simple alternative to the SVM approach of constructing a linear decision boundary in the embedded (kernel)
space, and better exploits the covariance structure of the embedded data.
2 Classification Criterion and Analysis
2.1 Minimum Incremental Coding Length
A lossy coding scheme [5] maps vectors X = (x_1, . . . , x_m) ∈ R^{n×m} to a sequence of binary bits,
from which the original vectors can be recovered up to an allowable distortion E[‖x̂ − x‖²] ≤ ε².
The length of the bit sequence is then a function L_ε(X) : R^{n×m} → Z_+. If we encode each class
of training data X_j = {x_i : y_i = j} separately using L_ε(X_j) bits, the entire training dataset can be
represented by a two-part code using Σ_{j=1}^K L_ε(X_j) − |X_j| log₂ p_Y(j) bits. Here, the second term is
the minimum number of bits needed to (losslessly) code the class labels y_i.
Now, suppose we are given a test observation x ∈ R^n, whose associated class label y(x) = j is
unknown. If we code x jointly with the training data X_j of the j-th class, the number of additional
bits needed to code the pair (x, y) is δL_ε(x, j) = L_ε(X_j ∪ {x}) − L_ε(X_j) + L(j). Here, the first two
terms measure the excess bits needed to code (x, X_j) up to distortion ε², while the last term L(j)
is the cost of losslessly coding the label y(x) = j. One may view these as "finite-sample lossy"
surrogates for the Shannon coding lengths in the ideal classifier (1). This interpretation naturally
leads to the following classifier:
Criterion 1 (Minimum Incremental Coding Length). Assign x to the class which minimizes the
number of additional bits needed to code (x, ŷ), subject to the distortion ε:

    ŷ(x) = arg min_{j∈{1,...,K}} δL_ε(x, j).    (2)
The above criterion (2) can be taken as a general principle for classification, in the sense that it can be
applied using any lossy coding scheme. Nevertheless, effective classification demands that the chosen coding scheme be approximately optimal for the given data. From a finite sample perspective,
L_ε should approximate the Kolmogorov complexity of X, while in an asymptotic, statistical setting
it should approach the lower bound given by the rate-distortion of the generating distribution [5].
Lossy Coding of Gaussian Data. We will first consider a coding length function L_ε introduced
and rigorously justified in [13], which is (asymptotically) optimal for Gaussians. The (implicit) use
of a coding scheme which is optimal for Gaussian sources is equivalent to assuming that the conditional class distributions pX|Y can be well-approximated by Gaussians. After rigorously analyzing
this admittedly restrictive scenario, we will extend the MICL classifier (with this same L? function)
to arbitrary, multimodal distributions via an effective local Gaussian approximation.
For a multivariate Gaussian source N(μ, Σ), the average number of bits needed to code a vector
subject to a distortion ε² is approximately R_ε(Σ) = (1/2) log₂ det( I + (n/ε²) Σ ) (bits/vector). Observations
X = (x_1, . . . , x_m) with sample mean μ̂ = (1/m) Σ_i x_i and covariance Σ̂(X) = (1/(m−1)) Σ_i (x_i − μ̂)(x_i − μ̂)ᵀ
can be represented up to expected distortion ε² using ≈ m R_ε(Σ̂) bits. The optimal
codebook is adaptive to the data, and can be encoded by representing the principal axes of the
covariance using an additional n R_ε(Σ̂) bits. Encoding the mean vector μ̂ requires an additional
(n/2) log₂( 1 + μ̂ᵀμ̂/ε² ) bits. The total number of bits required to code X is therefore

    L_ε(X) = ((m+n)/2) log₂ det( I + (n/ε²) Σ̂(X) ) + (n/2) log₂( 1 + μ̂ᵀμ̂/ε² ).    (3)
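Equation (3) is straightforward to evaluate numerically. The sketch below (Python/numpy, our conventions: one sample per column, m ≥ 2) is illustrative rather than the authors' code:

```python
import numpy as np

def coding_length(X, eps):
    """L_eps(X) of Equation (3) for an n x m data matrix X (requires m >= 2)."""
    n, m = X.shape
    mu = X.mean(axis=1, keepdims=True)
    Sigma = (X - mu) @ (X - mu).T / (m - 1)
    _, logdet = np.linalg.slogdet(np.eye(n) + (n / eps**2) * Sigma)
    mu_sq = float(np.sum(mu**2))
    return (0.5 * (m + n) * logdet / np.log(2)      # convert ln det to log2 det
            + 0.5 * n * np.log2(1 + mu_sq / eps**2))
```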
[Figure 1 panels: MICL, k-NN, SVM-RBF]
Figure 1: MICL harnesses linear structure in the data to interpolate (left) and extrapolate (center) in
sparsely sampled regions. Popular classifiers such as k-NN and SVM-RBF do not (right).
The first term gives the number of bits needed to represent the distribution of the xi about their mean,
and the second gives the cost of representing the mean. The above function well-approximates the
optimal coding length for Gaussian data, and has also been shown to give a good upper bound on the
number of bits needed to code finitely many samples lying on a linear subspace (e.g., a degenerate
Gaussian distribution) [13].
Coding the Class Label. Since the label Y is discrete, it can be coded losslessly. If the test class
labels Y are known to have the marginal distribution P[Y = j] = π_j, then the optimal coding
lengths are (within one bit): L(j) = −log₂ π_j. In practice, we may replace π_j with the estimate
π̂_j = |X_j|/m. Notice that as in the MAP classifier, the π_j essentially form a prior on class labels.
Combining this coding length for the class label with the coding length function (3) for the observations,
we summarize the MICL criterion (2) as Algorithm 1 below:
Algorithm 1 (MICL Classifier).
1: Input: m training samples partitioned into K classes X_1, X_2, . . . , X_K and a test sample x.
2: Compute the prior distribution of class labels: π̂_j = |X_j|/m.
3: Compute the incremental coding length of x for each class:

       δL_ε(x, j) = L_ε(X_j ∪ {x}) − L_ε(X_j) − log₂ π̂_j,

   where L_ε(X) = ((m+n)/2) log₂ det( I + (n/ε²) Σ̂(X) ) + (n/2) log₂( 1 + μ̂ᵀμ̂/ε² ).
4: Output: ŷ(x) = arg min_{j=1,...,K} δL_ε(x, j).
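Algorithm 1 then reduces to a few lines on top of the coding-length function sketched above; the function and variable names here are hypothetical, assuming one (n x m_j) matrix per class:

```python
import numpy as np

def micl_classify(x, classes, eps):
    """Assign x (an n-vector) to the class needing the fewest additional bits."""
    m = sum(Xj.shape[1] for Xj in classes)
    def delta(Xj):
        return (coding_length(np.column_stack([Xj, x]), eps)
                - coding_length(Xj, eps)
                - np.log2(Xj.shape[1] / m))   # cost of coding the class label
    return int(np.argmin([delta(Xj) for Xj in classes]))
```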
The quantity L_ε(X_j ∪ {x}) can be computed in O(min(m, n)²) time (see [21]), allowing the MICL classifier
to be directly applied to high-dimensional data. Figure 1 shows the performance of Algorithm 1 on
two toy problems. In both cases, the MICL criterion harnesses the covariance structure of the data
to achieve good classification in sparsely sampled regions. In the left example, the criterion interpolates the data structure to achieve correct classification, even near the origin where the samples
are sparse. In the right example, the criterion extrapolates the horizontal line to the other side of the
plane. Methods such as k-NN and SVM do not achieve the same effect. Notice, however, that these
decision boundaries are similar to what MAP/QDA would give. This raises an important question:
what is the precise relationship between MICL and MAP, and when is MICL superior?
2.2 Asymptotic Behavior and Relationship to MAP
In this section, we analyze the asymptotic behavior of Algorithm 1 as the number of training samples
goes to infinity. The following result, whose proof is given in [21], indicates that MICL converges
to a regularized version of ML/MAP, subject to a reward on the dimension of the classes:
Theorem 1 (Asymptotic MICL [21]). Let the training samples {(x_i, y_i)}_{i=1}^m ∼iid p_{X,Y}(x, y), with
μ_j = E[X|Y = j], Σ_j = Cov(X|Y = j). Then as m → ∞, the MICL criterion coincides
(asymptotically, with probability one) with the decision rule

    ŷ(x) = argmax_{j=1,...,K}  L_G( x | μ_j, Σ_j + (ε²/n) I ) + ln π_j + (1/2) D_ε(Σ_j),    (4)

where L_G(· | μ, Σ) is the log-likelihood function for a N(μ, Σ) distribution, and D_ε(Σ_j) =
tr( Σ_j (Σ_j + (ε²/n) I)^{−1} ) is the effective dimension of the j-th model, relative to the distortion ε².
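For intuition, the asymptotic decision rule (4) can be evaluated directly when the class moments are known. A sketch assuming numpy/scipy, with the moments supplied by the caller:

```python
import numpy as np
from scipy.stats import multivariate_normal

def asymptotic_micl_score(x, mu_j, Sigma_j, pi_j, eps):
    """Score of Equation (4): regularized Gaussian log-likelihood,
    plus log prior, plus half the effective dimension."""
    n = len(mu_j)
    Sigma_reg = Sigma_j + (eps**2 / n) * np.eye(n)
    loglik = multivariate_normal.logpdf(x, mean=mu_j, cov=Sigma_reg)
    d_eps = np.trace(Sigma_j @ np.linalg.inv(Sigma_reg))
    return loglik + np.log(pi_j) + 0.5 * d_eps
```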
[Figure 2 plots: panels (a),(b) show excess risk versus log ε and number of training samples; panels (c),(d) show excess risk versus ambient dimension and number of training samples, for MICL vs. MAP and MICL vs. RDA.]
Figure 2: Left: Excess risk incurred by using MAP rather than MICL, as a function of ε and m. (a)
isotropic Gaussians. (b) anisotropic Gaussians. Right: Excess risk for nested classes, as a function
of n and m. (c) MICL vs. MAP. (d) MICL vs. RDA. In all examples, MICL is superior for n ≫ m.
This result shows that asymptotically, MICL generates a family of MAP-like classifiers parametrized
by the distortion ε². If all of the distributions are nondegenerate (i.e. their covariance matrices Σ_j
are nonsingular), then lim_{ε→0} (Σ_j + (ε²/n) I) = Σ_j and lim_{ε→0} D_ε(Σ_j) = n, a constant across the
various classes. Thus, for nondegenerate data, the family of classifiers induced by MICL contains
the conventional MAP classifier (1) at ε = 0. Given a finite number, m, of samples, any reasonable
rule for choosing the distortion ε² should therefore ensure that ε → 0 as m → ∞. This guarantees
that for non-degenerate distributions, MICL converges to the asymptotically optimal MAP criterion.
Simulations (e.g., Figure 1) suggest that the limiting behavior provides useful information even
for finite training data. The following result, proven in [21], verifies that the MICL discriminant
functions δL_ε(x, j) converge quickly to their limiting form δL_ε^∞(x, j):

Theorem 2 (MICL Convergence Rate [21]). As the number of samples m → ∞, the MICL criterion
(2) converges to its asymptotic form (4) at a rate of m^{−1/2}. More specifically, with probability at least
1 − δ, |δL_ε(z, j) − δL_ε^∞(z, j)| ≤ c(δ) · m^{−1/2} for some constant c(δ) > 0.
2.3 Improvements over MAP
In the above, we have established the fact that asymptotically, the MICL criterion (4) is just as good
as the MAP criterion. Nevertheless, the MICL criterion makes several important modifications to
MAP, which significantly improve its performance on sparsely sampled or degenerate data.
Regularization and Finite-Sample Behavior. Notice that the first two terms of the asymptotic
MICL criterion (4) have the form of a MAP criterion, based on an N(μ, Σ + (ε²/n) I) distribution.
This is somewhat equivalent to softening the distribution by ε²/n along each dimension, and has two
important effects. First, it renders the associated MAP decision rule well-defined, even if the true
data distribution is (almost) degenerate. Even for non-degenerate distributions, there is empirical
evidence that for appropriately chosen ε, Σ̂ + (ε²/n) I gives more stable finite-sample classification [6].
Figure 2 demonstrates this effect on two simple examples. The generating distributions are parameterized as (a) μ₁ = [−1/2, 0], μ₂ = [1/2, 0], Σ₁ = Σ₂ = I, and (b) μ₁ = [−3/4, 0], μ₂ = [3/4, 0],
Σ₁ = diag(1, 4), Σ₂ = diag(4, 1). In each example, we vary the number of training samples, m,
and the distortion ε. For each (m, ε) combination, we draw m training samples from two Gaussian distributions N(μ_i, Σ_i), i = 1, 2, and estimate the Bayes risk of the resulting MICL and MAP
classifiers. This procedure is repeated 500 times, to estimate the overall Bayes risk with respect to
variations in the training data. Figure 2 visualizes the difference in risks, R_MAP − R_MICL. Positive values indicate that MICL is outperforming MAP. The red line approximates the zero level-set,
where the two methods perform equally well. In the isotropic case (a), MICL outperforms MAP for
all sufficiently large ε, with a larger performance gain when the number of samples is small. In the
anisotropic case (b), for most ε, MICL dramatically outperforms MAP for small sample sizes. We
will see in the next example that this effect becomes more pronounced as the dimension increases.
Dimension Reward. The effective dimension term D_ε(Σ_j) in the large-n MICL criterion (4) can
be rewritten as D_ε(Σ_j) = Σ_{i=1}^n λ_i / (ε²/n + λ_i), where λ_i is the i-th eigenvalue of Σ_j. If the data lie
near a d-dimensional subspace (λ₁, . . . , λ_d ≫ ε²/n and λ_{d+1}, . . . , λ_n ≪ ε²/n), then D_ε ≈ d. In general,
D_ε can be viewed as a "softened" estimate of the dimension3, relative to the distortion ε². MICL
therefore rewards distributions that have relatively higher dimension.4 However, this effect is somewhat countered by the regularization induced by ε, which rewards lower-dimensional distributions.
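The soft dimension count is easy to verify numerically; in the sketch below (our illustration), eigenvalues well above ε²/n each contribute about 1 and those well below contribute about 0:

```python
import numpy as np

def effective_dimension(Sigma, eps):
    """D_eps(Sigma) = sum_i lambda_i / (eps^2/n + lambda_i)."""
    n = Sigma.shape[0]
    lam = np.linalg.eigvalsh(Sigma)
    return float(np.sum(lam / (eps**2 / n + lam)))

# e.g. a covariance with d = 3 dominant directions out of n = 10 gives D_eps near 3
```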
Figure 2 (right) empirically compares MICL to the conventional MAP and the regularized MAP (or
RDA [6]). We draw m samples from three nested Gaussian distributions: one of full rank n, one of
rank n/2, and one of rank 1. All samples are corrupted by 4% Gaussian noise. We estimate the Bayes
risk for each (m, n) combination as in the previous example. The regularization parameter in RDA
and the distortion ε for MICL are chosen independently for each trial by cross validation. Plotted
are the (estimated) differences in risk, R_MAP − R_MICL (Fig. 2(c)) and R_RDA − R_MICL (Fig. 2(d)). The red lines again correspond to the zero level-set of the difference.
outperforms MAP for most (m, n), and that the effect is most pronounced when n is large and m is
small. When m is much smaller than n (e.g. the bottom row of Figure 2 right), MICL demonstrates
a significant performance gain with respect to RDA. As the number of samples increases, there is a
region where RDA is slightly better. For most (m, n), MICL and RDA are close in performance.
2.4 Extensions to Non-Gaussian Data
In practice, the data distribution(s) of interest may not be Gaussian. If the rate-distortion function is
known, one could, in principle, carry out similar analysis as for the Gaussian case. Nevertheless, in
this subsection, we discuss two practical modifications to the MICL criterion that are applicable to
arbitrary distributions and preserve the desirable properties discussed in the previous subsections.
Kernel MICL Criterion. Since XXᵀ and XᵀX have the same non-zero eigenvalues,

    log₂ det( I + α X Xᵀ ) = log₂ det( I + α Xᵀ X ).    (5)

This identity shows that L_ε(X) can also be computed from the inner products between the x_i. If the
data x (of each class) are not Gaussian but there exists a nonlinear map ψ : R^n → H such that the
transformed data ψ(x) are (approximately) Gaussian, we can replace the inner product x₁ᵀx₂ with
a symmetric positive definite kernel function k(x₁, x₂) = ψ(x₁)ᵀψ(x₂). Choosing a proper kernel
function will improve classification performance for non-Gaussian distributions. In practice, popular
choices include the polynomial kernel k(x₁, x₂) = (x₁ᵀx₂ + 1)^d, the radial basis function (RBF)
kernel k(x₁, x₂) = exp(−γ‖x₁ − x₂‖²), and their variants. Implementation details, including how
to properly account for the mean and dimension of the embedded data, are given in [21].
A similar transformation is used to generate nonlinear decision boundaries with SVM. Notice, however, that whereas SVM constructs a linear decision boundary in the lifted space H, kernel MICL
exploits the covariance structure of the lifted data, generating decision boundaries that are (asymptotically) quadratic. In Section 3 we will see that even for real data whose statistical nature is unclear,
kernel MICL outperforms SVM when applied with the same kernel function.
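Via identity (5), the determinant in L_ε can be computed from the m × m Gram matrix alone. The sketch below is a simplified reading: it centers the Gram matrix in feature space and treats the embedding dimension n_eff as a user-supplied constant, both of which are our assumptions ([21] gives the careful treatment):

```python
import numpy as np

def kernel_coding_length(K, eps, n_eff):
    """Covariance term of L_eps from an m x m Gram matrix K = Psi^T Psi,
    using log det(I + a X X^T) = log det(I + a X^T X)."""
    m = K.shape[0]
    Kc = K - K.mean(0) - K.mean(1)[:, None] + K.mean()   # center in feature space
    _, logdet = np.linalg.slogdet(
        np.eye(m) + (n_eff / (eps**2 * (m - 1))) * Kc)
    return 0.5 * (m + n_eff) * logdet / np.log(2)
```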
Local MICL Criterion. For real data whose distribution is unknown, it may be difficult to find an
appropriate kernel function. In this case, MICL can still be applied locally, in a neighborhood of the
test sample x. Let N^k(x) denote the k nearest neighbors of x in the training set X. Training data in
this neighborhood that belong to each class are N_j^k(x) = X_j ∩ N^k(x), j = 1, . . . , K. In the MICL
classifier (Algorithm 1), we replace the incremental coding length δL_ε(x, j) by its local version:

    δL_ε(x, j) = L_ε( N_j^k(x) ∪ {x} ) − L_ε( N_j^k(x) ) + L(j),    (6)

with L(j) = −log₂( |N_j^k(x)| / |N^k(x)| ). Theorem 1 implies that this gives a universal classifier:
Corollary 3. Suppose the conditional density p_j(x) = p(x|y = j) of each class is nondegenerate.
Then if k = o(m) and k, m → ∞, the local MICL criterion converges to the MAP criterion (1).
This follows, since as the radius of the neighborhood shrinks, the cost of coding the class label,
−log₂( |N_j^k(x)| / |N^k(x)| ) → −log₂ p_j(x), dominates the coding length (6). In this asymptotic
setting the local MICL criterion behaves like k-Nearest Neighbor (k-NN). However, the finite-sample behavior of the local MICL criterion can differ drastically from that of k-NN, especially
3
This quantity has been dubbed the effective number of parameters in the context of ridge regression [9].
4
This contrasts with the dimension penalties typical in model selection/estimation.
[Figure 3 panels: (a) KMICL-RBF, (b) SVM-RBF, (c) LMICL, (d) 5-NN]
Figure 3: Nonlinear extensions to MICL, compared to SVM and k-NN. Local MICL produces a
smoother and more intuitive decision boundary than k-NN. Kernel MICL and SVM produce similar
boundaries, that are smoother and better respect the data structure than those given by local methods.
Method          Error        Method           Error
LMICL           1.6%         SVM-Poly [20]    1.4%
k-NN            3.1%         Best [18]        0.4%

Method          Error        Method           Error
LMICL           4.9%         KMICL-Poly       4.7%
k-NN            5.3%         SVM-Poly [4]     5.3%
Table 1: Results for handwritten digit recognition. Top: MNIST dataset. Bottom: USPS dataset, with
identical preprocessing and kernel function. Here, kernel-MICL slightly outperforms SVM.
when the samples are sparse and the distributions involved are almost degenerate. In this case, from
(4), local MICL effectively approximates the local shape of the distribution $p_j(x)$ by a (regularized)
Gaussian, exploiting structure in the distribution of the nearest neighbors (see Figure 3).
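A minimal sketch of the resulting classifier follows, under the assumption that a plain log-det coding length is an adequate stand-in for $L_\varepsilon$; the neighborhood size `k` and the constant `alpha` are illustrative parameters, and ties are broken arbitrarily.

```python
import numpy as np

def coding_length(X, alpha=1.0):
    # Surrogate for L(X) of a sample matrix X (num_samples x dim); identity (5)
    if X.shape[0] == 0:
        return 0.0
    _, logdet = np.linalg.slogdet(np.eye(X.shape[0]) + alpha * X @ X.T)
    return logdet / np.log(2.0)

def local_micl_predict(x, X_train, y_train, k=20, alpha=1.0):
    # N_k(x): indices of the k nearest training points
    nbr = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    Xn, yn = X_train[nbr], y_train[nbr]
    costs = {}
    for j in np.unique(yn):
        Nj = Xn[yn == j]                                   # N_k^j(x)
        delta = (coding_length(np.vstack([Nj, x[None, :]]), alpha)
                 - coding_length(Nj, alpha)
                 - np.log2(len(Nj) / float(k)))            # + L(j), eq. (6)
        costs[j] = delta
    return min(costs, key=costs.get)                       # cheapest class wins
```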
3 Experiments with Real Imagery Data
Using experiments on real data, we demonstrate that MICL and its nonlinear variants approach the
best results from more sophisticated systems, without relying on domain-specific information.
Handwritten Digit Recognition. We first test the MICL classifier on two standard datasets for
handwritten digit recognition (Table 1 top). The MNIST handwritten digit dataset [10] consists of
60,000 training images and 10,000 test images. We achieved better results using the local version
of MICL, due to the non-Gaussian distribution of the data. With $k = 20$ and $\varepsilon = 150$, local MICL
achieves a test error of 1.59%, outperforming simple methods such as k-NN as well as many more
complicated neural network approaches (e.g. LeNet-1 [10]). MICL's error rate approaches the
best result for a generic learning machine (1.4% error for SVM with a degree-4 polynomial kernel).
Problem-specific approaches have resulted in lower error rates, however, with the best reported result
achieved using a specially engineered neural network [18].
We also test on the challenging USPS digits database (Table 1 bottom). Here, even humans have
considerable difficulties (about 2.5% error). With $k = 35$ and $\varepsilon = 0.03$, local MICL achieves an error
rate of 4.88%, again outperforming k-NN. We further compare the performance of kernel MICL
to SVM (using [4]) on this dataset with the same homogeneous, degree 3 polynomial kernel, and
identical preprocessing (normalization and centering), allowing us to compare pure classification
performance. Here, SVM achieves a 5.3% error, while kernel-MICL achieves an error rate of 4.7%
with distortion $\varepsilon = 0.0067$ (chosen automatically by cross-validation). Using domain-specific information, one can achieve better results. For instance, [17] achieves 2.7% error using tangent distance
to a large number of prototypes. Other preprocessing steps, synthetic training images, or more advanced skew-correction and normalization techniques have been applied to lower the error rate for
SVM (e.g. 4.1% in [20]). While we have avoided extensive preprocessing here, so as to isolate the
effect of the classifier, such preprocessing can be readily incorporated into our framework.
Face Recognition. We further verify MICL's effectiveness on sparsely sampled high-dimensional
data using the Yale Face Database B [7], which tests illumination sensitivity of face recognition
algorithms. Following [7, 11], we use subsets 1 and 2 for training, and report the average test error
across the four subsets. We apply Algorithm 1, not the local or kernel version, with $\varepsilon = 75$. MICL
significantly outperforms classical subspace techniques on this problem (see Table 2), with error
0.9% near the best reported results in [7, 11] that were obtained using a domain-specific model of
Method         Error     Method           Error
MICL           0.9%      Eigenface [7]    25.8%
Subspace [7]   4.6%      Best [11]        0%
(Subsets 1,2 used for training; subsets 1-4 for testing.)

Table 2: Face recognition under widely varying illumination. MICL outperforms classical face
recognition methods such as Eigenfaces on Yale Face Database B [7].
illumination for face images. We suggest that the source of this improved performance is precisely
the regularization induced by lossy coding. In this problem the number of training vectors per class,
19, is small compared to the dimension, n = 32,256 (for raw 168 × 192 images). Simulations (e.g.
Figure 2) show that this is exactly the circumstance in which MICL is superior to MAP and even
RDA. Interestingly, this suggests that directly exploiting degenerate or low-dimensional structures
via MICL renders dimensionality reduction before classifying unnecessary or even undesirable.
4 Conclusion
We have proposed and studied a new information theoretic classification criterion, Minimum Incremental Coding Length (MICL), establishing its optimality for Gaussian data. MICL generates a
family of classifiers that inherit many of the good properties of MAP, RDA, and k-NN, while extending their working conditions to sparsely sampled or degenerate high-dimensional observations.
MICL and its kernel and local versions approach best reported performance on high-dimensional visual recognition problems without domain-specific engineering. Due to its simplicity and flexibility,
we believe MICL can be successfully applied to a wide range of real-world classification problems.
References
[1] A. Barron, J. Rissanen, and B. Yu. The minimum description length principle in coding and modeling. IEEE Transactions on Information Theory, 44(6):2743–2760, 1998.
[2] R. Basri and D. Jacobs. Lambertian reflection and linear subspaces. PAMI, 25(2):218–233, 2003.
[3] P. Bickel and B. Li. Regularization in statistics. TEST, 15(2):271–344, 2006.
[4] C. Chang and C. Lin. LIBSVM: a library for support vector machines, 2001.
[5] T. Cover and J. Thomas. Elements of Information Theory. Wiley Series in Telecommunications, 1991.
[6] J. Friedman. Regularized discriminant analysis. JASA, 84:165–175, 1989.
[7] A. Georghiades, P. Belhumeur, and D. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. PAMI, 23(6):643–660, 2001.
[8] P. Grunwald and J. Langford. Suboptimal behaviour of Bayes and MDL in classification under misspecification. In Proceedings of Conference on Learning Theory, 2004.
[9] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2001.
[10] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[11] K. Lee, J. Ho, and D. Kriegman. Acquiring linear subspaces for face recognition under variable lighting. PAMI, 27(5):684–698, 2005.
[12] J. Li. A source coding approach to classification by vector quantization and the principle of minimum description length. In IEEE DCC, pages 382–391, 2002.
[13] Y. Ma, H. Derksen, W. Hong, and J. Wright. Segmentation of multivariate mixed data via lossy data coding and compression. PAMI, 29(9):1546–1562, 2007.
[14] D. MacKay. Developments in probabilistic modelling with neural networks: ensemble learning. In Proc. 3rd Annual Symposium on Neural Networks, pages 191–198, 1995.
[15] M. Madiman, M. Harrison, and I. Kontoyiannis. Minimum description length vs. maximum likelihood in lossy data compression. In IEEE International Symposium on Information Theory, 2004.
[16] J. Rissanen. Modeling by shortest data description. Automatica, 14:465–471, 1978.
[17] P. Simard, Y. LeCun, and J. Denker. Efficient pattern recognition using a new transformation distance. In Proceedings of NIPS, volume 5, 1993.
[18] P. Simard, D. Steinkraus, and J. Platt. Best practice for convolutional neural networks applied to visual document analysis. In ICDAR, pages 958–962, 2003.
[19] N. Tishby, F. Pereira, and W. Bialek. The information bottleneck method. In Allerton, 1999.
[20] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 2000.
[21] J. Wright, Y. Tao, Z. Lin, Y. Ma, and H. Shum. Classification via minimum incremental coding length (MICL). Technical report, UILU-ENG-07-2201, http://perception.csl.uiuc.edu/coding, 2007.
Collective Inference on Markov Models
for Modeling Bird Migration
Daniel Sheldon
M. A. Saleh Elmohamed
Dexter Kozen
Cornell University
Ithaca, NY 14853
{dsheldon,kozen}@cs.cornell.edu
saleh@cam.cornell.edu
Abstract
We investigate a family of inference problems on Markov models, where many
sample paths are drawn from a Markov chain and partial information is revealed
to an observer who attempts to reconstruct the sample paths. We present algorithms and hardness results for several variants of this problem which arise by revealing different information to the observer and imposing different requirements
for the reconstruction of sample paths. Our algorithms are analogous to the classical Viterbi algorithm for Hidden Markov Models, which finds the single most
probable sample path given a sequence of observations. Our work is motivated by
an important application in ecology: inferring bird migration paths from a large
database of observations.
1
Introduction
Hidden Markov Models (HMMs) assume a generative model for sequential data whereby a sequence
of states (or sample path) is drawn from a Markov chain in a hidden experiment. Each state generates
an output symbol from alphabet $\Sigma$, and these output symbols constitute the data or observations. A
classical problem, solved by the Viterbi algorithm, is to find the most probable sample path given
certain observations for a given Markov model. We call this the single path problem; it is well suited
to labeling or tagging a single sequence of data. For example, HMMs have been successfully applied
in speech recognition [1], natural language processing [2], and biological sequencing [3].
We introduce two generalizations of the single path problem for performing collective inference on
Markov models, motivated by an effort to model bird migration patterns using a large database of
static observations. The eBird database hosted by the Cornell Lab of Ornithology contains millions
of bird observations from throughout North America, reported by the general public using the eBird
web application.1 Observations report location, date, species and number of birds observed. The
eBird data set is very rich; the human eye can easily discern migration patterns from animations
showing the observations as they unfold over time on a map of North America.2 However, the
eBird data are static, and they do not explicitly record movement, only the distributions at different
points in time. Conclusions about migration patterns are made by the human observer. Our goal is
to build a mathematical framework to infer dynamic migration models from the static eBird data.
Quantitative migration models are of great scientific and practical import: for example, this problem
arose out of an interdisciplinary project at Cornell University to model the possible spread of avian
influenza in North America through wild bird migration.
The migratory behavior for a species of birds can be modeled using a single generative process
that independently governs how individual birds fly between locations, giving rise to the following
¹ http://ebird.org
² http://www.avianknowledge.net/visualization
inference problem: a hidden experiment simultaneously draws many independent sample paths from
a Markov chain, and the observations reveal aggregate information about the collection of sample
paths at each time step, from which the observer attempts to reconstruct the paths. For example, the
eBird data estimate the geographical distribution of a species on successive days, but do not track
individual birds.
We discuss two problems within this framework. In the multiple path problem, we assume that
exactly M independent sample paths are drawn from the Markov model, and the observations reveal
the number of paths that output symbol $\alpha$ at time $t$, for each $\alpha$ and $t$. The observer seeks the
most likely collection of paths given the observations. The fractional path problem is a further
generalization in which paths are divisible entities. The observations reveal the fraction of paths that
output symbol $\alpha$ at time $t$, and the observer's job is to find the most likely (in a sense to be defined
later) weighted collection of paths given the observations. Conceptually, the fractional path problem
can be derived from the multiple path problem by letting M go to infinity; or it has a probabilistic
interpretation in terms of distributions over paths.
After discussing some preliminaries in section 2, sections 3 and 4 present algorithms for the multiple
and fractional path problems, respectively, using network flow techniques on the trellis graph of the
Markov model. The multiple path problem in its most general form is NP-hard, but can be solved
as an integer program. The special case when output symbols uniquely identify their associated
states can be solved efficiently as a flow problem; although the single path problem is trivial in this
case, the multiple and fractional path problems remain interesting. The fractional path problem can
be solved by linear programming. We also introduce a practical extension to the fractional path
problem, including slack variables allowing the solution to deviate slightly from potentially noisy
observations. In section 5, we demonstrate our techniques with visualizations for the migration of
Archilochus colubris, the Ruby-throated Hummingbird, devoting some attention to a challenging
problem we have neglected so far: estimating species distributions from eBird observations.
We briefly mention some related work. Caruana et al. [4] and Phillips et al. [5] used machine
learning techniques to model bird distributions from observations and environmental features. For
problems on sequential data, many variants of HMMs have been proposed [3], and recently, conditional random fields (CRFs) have become a popular alternative [6]. Roth and Yih [7] present an
integer programming inference framework for CRFs that is similar to our problem formulations.
2 Preliminaries
2.1 Data Model and Notation
A Markov model $(V, p, \Sigma, \sigma)$ is a Markov chain with state set $V$ and transition probabilities $p(u, v)$
for all $u, v \in V$. Each state generates a unique output symbol from alphabet $\Sigma$, given by the mapping
$\sigma : V \to \Sigma$. Although some presentations allow each state to output multiple symbols with different
emission probabilities, we lose no generality assuming that each state emits a unique symbol: to
encode a model where state $v$ outputs multiple symbols, we simply duplicate $v$ for each symbol and
encode the emission probabilities into the transitions. Of course, $\sigma$ need not be one-to-one. It is
useful to think of $\sigma$ as a partition of the states, letting $V_\alpha = \sigma^{-1}(\alpha)$ be the set of all states that
output $\alpha$. We assume each model has a distinguished start state $s$ and output symbol start.
Let $Y = V^T$ be the set of all possible sample paths of length $T$. We represent a path $y \in Y$ as a row
vector $y = (y_1, \dots, y_T)$, and a collection of $M$ paths as the $M \times T$ matrix $\mathbf{Y} = (y_{it})$, with each
row $y_{i\cdot}$ representing an independent sample path. The transition probabilities induce a distribution
$\pi$ on $Y$, where $\pi(y) = \prod_{t=1}^{T-1} p(y_t, y_{t+1})$. We will also consider arbitrary distributions $\mu$ over $Y$,
letting $Y = (Y_1, \dots, Y_T)$ denote a random path from $\mu$. Then, for example, we write $\Pr_\mu[Y_t = u]$
for the probability under $\mu$ that the $t$th state is $u$, and $E_\mu[f(Y)]$ for the expected value of $f(Y)$
for any function $f$ of a random path $Y$ drawn from $\mu$. Note that $\mathbf{Y}$ (boldface) denotes a matrix of
$M$ paths, while $Y$ denotes a random path.
2.2 The Trellis Graph and Viterbi as Shortest Path
To develop our flow-based algorithms, it is instructive to build upon a shortest-path interpretation of
the Viterbi algorithm [7]. In an instance of the single path problem we are given a model $(V, p, \Sigma, \sigma)$
Figure 1: Trellis graph for a Markov model with states {s, u, v, w} and alphabet {start, 0, 1}. States u
and v output the symbol 0, and state w outputs the symbol 1. (a) The bold path is feasible for the specified
observations, with probability p(s, u)p(u, u)p(u, w). (b) Infeasible edges have been removed (indicated by
light dashed lines), and probabilities changed to costs. The bold path has cost c(s, u) + c(u, u) + c(u, w).
and observations $\alpha_1, \dots, \alpha_T$, and we seek the most probable path $y$ given these observations. We
call path $y$ feasible if $\sigma(y_t) = \alpha_t$ for all $t$; then we wish to maximize $\pi(y)$ over feasible $y$. The
problem is conveniently illustrated using the trellis graph of the Markov model (Figure 1). Here, the
states are replicated for each time step, and edges connect a state at time $t$ to its possible successors
at time $t + 1$, labeled with the transition probability. A feasible path must pass through partition
$V_{\alpha_t}$ at step $t$, so we can prune all edges incident on other partitions, leaving only feasible paths. By
defining the cost of an edge as $c(u, v) = -\log p(u, v)$, and letting the path cost $c(y)$ be the sum
of its edge costs, straightforward algebra shows that $\arg\max_y \pi(y) = \arg\min_y c(y)$, i.e., the path
of maximum probability becomes the path of minimum cost under this transformation. Thus the
Viterbi algorithm finds the shortest feasible path in the trellis using edge lengths $c(u, v)$.
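The shortest-path view translates directly into a dynamic program over the trellis. The sketch below is a generic Python rendering under this section's notation ($V$, $p$, $\sigma$); the dictionary-based transition table and the tie-breaking are choices made for the sketch, not part of the model.

```python
import math

def viterbi(states, p, sigma, obs):
    """Most probable path y with sigma[y_t] == obs[t], found as a shortest
    path on the trellis with edge costs c(u, v) = -log p(u, v)."""
    T = len(obs)
    feasible = [[s for s in states if sigma[s] == a] for a in obs]
    cost = {s: 0.0 for s in feasible[0]}            # layer-0 path costs
    back = [{} for _ in range(T)]                   # back-pointers per layer
    for t in range(1, T):
        new_cost = {}
        for v in feasible[t]:
            best, arg = math.inf, None
            for u in feasible[t - 1]:
                if u in cost and p[u][v] > 0:
                    c = cost[u] - math.log(p[u][v])
                    if c < best:
                        best, arg = c, u
            if arg is not None:
                new_cost[v], back[t][v] = best, arg
        cost = new_cost
    y = [min(cost, key=cost.get)]                   # cheapest final state
    for t in range(T - 1, 0, -1):                   # trace the path backwards
        y.append(back[t][y[-1]])
    return list(reversed(y))
```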
3 Multiple Path Problem
In the multiple path problem, $M$ sample paths are drawn from the model and the observations reveal
the number of paths $N_t(\alpha)$ that output $\alpha$ at time $t$, for all $\alpha$ and $t$; or, equivalently, the multiset $A_t$
of output symbols at time $t$. The objective is to find the most probable collection $\mathbf{Y}$ that is feasible,
meaning it produces multisets $A_1, \dots, A_T$. The probability $\pi(\mathbf{Y})$ is just the product of the path-wise
probabilities:
$$\pi(\mathbf{Y}) = \prod_{i=1}^{M} \pi(y_i) = \prod_{i=1}^{M} \prod_{t=1}^{T-1} p(y_{i,t}, y_{i,t+1}). \qquad (1)$$
Then the formal specification of this problem is
$$\max_{\mathbf{Y}} \pi(\mathbf{Y}) \quad \text{subject to} \quad |\{i : y_{i,t} \in V_\alpha\}| = N_t(\alpha) \ \text{for all } \alpha, t. \qquad (2)$$
3.1 Reduction to the Single Path Problem
A naive approach to the multiple path problem reduces it to the single path problem by creating a new
Markov model on state set $V^M$ where state $\langle v_1, \dots, v_M \rangle$ encodes an entire tuple of original states,
and the transition probabilities are given by the product of the element-wise transition probabilities:
$$p(\langle u_1, \dots, u_M \rangle, \langle v_1, \dots, v_M \rangle) = \prod_{i=1}^{M} p(u_i, v_i).$$
A state from the product space $V^M$ corresponds to an entire column of the matrix $\mathbf{Y}$, and by changing the order of multiplication in (1), we see that the probability of a path in the new model is equal
to the probability of the entire collection of paths in the old model. To complete the reduction, we
form a new alphabet $\hat{\Sigma}$ whose symbols represent multisets of size $M$ on $\Sigma$. Then the solution to (2)
can be found by running the Viterbi algorithm to find the most likely sequence of states from $V^M$
that produce output symbols (multisets) $A_1, \dots, A_T$. The running time is polynomial in $|V^M|$ and
$|\hat{\Sigma}|$, but exponential in $M$.
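For intuition, the product construction can be written down mechanically; the sketch below enumerates $V^M$ explicitly, which makes the exponential blow-up in $M$ plain to see. All names are illustrative.

```python
from itertools import product

def product_model(states, p, M):
    """Build the product-space chain on V^M with transition probabilities
    p(<u_1..u_M>, <v_1..v_M>) = prod_i p(u_i, v_i)."""
    tuples = list(product(states, repeat=M))       # |V|^M product states
    P = {}
    for U in tuples:
        for V in tuples:
            prob = 1.0
            for u, v in zip(U, V):
                prob *= p[u][v]
            if prob > 0:
                P[(U, V)] = prob                   # element-wise product rule
    return tuples, P
```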
3.2 Graph Flow Formulation
Can we do better than the naive approach? Viewing the cost of a path as the cost of routing one
unit of flow along that path in the trellis, a minimum cost collection of $M$ paths is equivalent to a
minimum cost flow of $M$ units through the trellis: given $M$ paths, we can route one unit along each
to get a flow, and we can decompose any flow of $M$ units into paths each carrying a single unit of
flow. Thus we can write the optimization problem in (2) as the following flow integer program, with
additional constraints that the flow paths generate the correct observations. The decision variable
$x^t_{uv}$ indicates the flow traveling from $u$ to $v$ at time $t$; or, the number of sample paths that transition
from $u$ to $v$ at time $t$.
$$\min \sum_{u,v,t} c(u, v)\, x^t_{uv} \qquad (\mathrm{IP})$$
subject to
$$\sum_{u} x^t_{uv} = \sum_{w} x^{t+1}_{vw} \quad \text{for all } v, t, \qquad (3)$$
$$\sum_{u \in V_\alpha,\, v \in V} x^t_{uv} = N_t(\alpha) \quad \text{for all } \alpha, t, \qquad (4)$$
$$x^t_{uv} \in \mathbb{N} \quad \text{for all } u, v, t.$$
The flow conservation constraints (3) are standard: the flow into $v$ at time $t$ is equal to the flow
leaving $v$ at time $t + 1$. The observation constraints (4) specify that $N_t(\alpha)$ units of flow leave
partition $V_\alpha$ at time $t$. These also imply that exactly $M$ units of flow pass through each level of the
trellis, by summing over all $\alpha$:
$$\sum_{u,v} x^t_{uv} = \sum_{\alpha} \sum_{u \in V_\alpha,\, v \in V} x^t_{uv} = \sum_{\alpha} N_t(\alpha) = M.$$
Without the observation constraints, IP would be an instance of the minimum-cost flow problem [8],
which is solvable in polynomial time by a variety of algorithms [9]. However, we cannot hope to
encode the observation constraints into the flow framework, due to the following result.
Lemma 1. The multiple path problem is NP-hard.
The proof of Lemma 1 is by reduction from SET COVER, and is omitted. One may use a general
purpose integer program solver to solve IP directly; this may be efficient in some cases despite the
lack of polynomial time performance guarantees. In the following sections we discuss alternatives
that are efficiently solvable.
3.3 An Efficient Special Case
In the special case when $\sigma$ is one-to-one, the output symbols uniquely identify their generating
states, so we may assume that $\Sigma = V$, and the output symbol is always the name of the current state.
To see how the problem IP simplifies, note that we now have $V_u = \{u\}$ for all $u$, so each partition consists of
a single state, and the observations completely specify the flow through each node in the trellis:
$$\sum_{v} x^t_{uv} = N_t(u) \quad \text{for all } u, t. \qquad (4')$$
Substituting the new observation constraints (4′) for time $t+1$ into the RHS of the flow conservation
constraints (3) for time $t$ yields the following replacements:
$$\sum_{u} x^t_{uv} = N_{t+1}(v) \quad \text{for all } v, t. \qquad (3')$$
This gives an equivalent set of constraints, each of which refers only to variables $x^t_{uv}$ for a single
$t$. Hence the problem can be decomposed into $T - 1$ disjoint subproblems for $t = 1, \dots, T - 1$.
The $t$th subproblem IP$_t$ is given in Figure 2(a), and illustrated on the trellis in Figure 2(b). State
$u$ at time $t$ has a supply of $N_t(u)$ units of flow coming from the previous step, and we must route
$N_{t+1}(v)$ units of flow to state $v$ at time $t + 1$, so we place a demand of $N_{t+1}(v)$ at the corresponding
node. Then the problem reduces to finding a minimum cost routing of the supply from time $t$ to meet
the demand at time $t + 1$, solved separately for all $t = 1, \dots, T - 1$. The problem IP$_t$ is an instance
of the transportation problem [10], a special case of the minimum-cost flow problem. There are a
variety of efficient algorithms to solve both problems [8, 9], or one may use a general purpose linear
program (LP) solver; any basic solution to the LP relaxation of IP$_t$ is guaranteed to be integral [8].
$$\min \sum_{u,v} c(u, v)\, x^t_{uv} \qquad (\mathrm{IP}_t)$$
subject to
$$\sum_{u} x^t_{uv} = N_{t+1}(v) \quad \text{for all } v, \qquad (3')$$
$$\sum_{v} x^t_{uv} = N_t(u) \quad \text{for all } u, \qquad (4')$$
$$x^t_{uv} \in \mathbb{N} \quad \text{for all } u, v.$$
Figure 2: (a) The definition of subproblem IP$_t$, with supplies $N_t(\cdot)$ at time $t$ and demands $N_{t+1}(\cdot)$ at time $t+1$. (b) Illustration on the trellis.
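Because basic solutions of the LP relaxation are integral, an off-the-shelf LP solver suffices for each subproblem. A sketch with scipy.optimize.linprog follows; the row-major flattening of $x^t_{uv}$ and the assumption that the state sets at times $t$ and $t+1$ coincide are conveniences of this sketch.

```python
import numpy as np
from scipy.optimize import linprog

def solve_transport(c, supply, demand):
    """One subproblem IP_t: route supply N_t(u) to demand N_{t+1}(v) at
    minimum cost, with c[u, v] = -log p(u, v). Assumes the supplies and
    demands balance: sum(supply) == sum(demand)."""
    n = len(supply)
    cost = c.reshape(-1)                   # x flattened row-major: x[u*n + v]
    A_eq, b_eq = [], []
    for u in range(n):                     # sum_v x_uv = supply[u]   (4')
        row = np.zeros(n * n); row[u * n:(u + 1) * n] = 1.0
        A_eq.append(row); b_eq.append(supply[u])
    for v in range(n):                     # sum_u x_uv = demand[v]   (3')
        row = np.zeros(n * n); row[v::n] = 1.0
        A_eq.append(row); b_eq.append(demand[v])
    res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None))
    return res.x.reshape(n, n)             # basic solutions are integral [8]
```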
4 Fractional Path Problem
In the fractional path problem, a path is a divisible entity. The observations specify $q_t(\alpha)$, the
fraction of paths that output $\alpha$ at time $t$, and the observer chooses $\mu(y)$ fractional units of each
path $y$, totaling one unit, such that $q_t(\alpha)$ units output $\alpha$ at time $t$. The objective is to maximize
$$\prod_{y \in Y} \pi(y)^{\mu(y)}.$$
Put another way, $\mu$ is a distribution over paths such that $\Pr_\mu[Y_t \in V_\alpha] = q_t(\alpha)$,
i.e., $q_t$ specifies the marginal distribution over symbols at time $t$. By taking the logarithm, an equivalent objective is to maximize $E_\mu[\log \pi(Y)]$, so we seek the distribution $\mu$ that maximizes the
expected log-probability of a path $Y$ drawn from $\mu$. Conceptually, the fractional path problem arises
by letting $M \to \infty$ in the multiple path problem and normalizing to let $q_t(\alpha) = N_t(\alpha)/M$ specify
the fraction of paths that output $\alpha$ at time $t$. Operationally, the fractional path problem is modeled
by the LP relaxation of IP, which routes one splittable unit of flow through the trellis.
$$\min \sum_{u,v,t} c(u, v)\, x^t_{uv} \qquad (\mathrm{RELAX})$$
subject to
$$\sum_{u} x^t_{uv} = \sum_{w} x^{t+1}_{vw} \quad \text{for all } v, t,$$
$$\sum_{u \in V_\alpha} \sum_{v \in V} x^t_{uv} = q_t(\alpha) \quad \text{for all } \alpha, t, \qquad (5)$$
$$x^t_{uv} \ge 0 \quad \text{for all } u, v, t.$$
It is easy to see that a unit flow $x$ corresponds to a probability distribution $\mu$. Given any distribution
$\mu$, let $x^t_{uv} = \Pr_\mu[Y_t = u, Y_{t+1} = v]$; then $x$ is a flow because the probability that a path enters $v$ at
time $t$ is equal to the probability it leaves $v$ at time $t + 1$. Conversely, given a unit flow $x$, any path
decomposition assigning flow $\mu(y)$ to each $y \in Y$ is a probability distribution because the total flow
is one. In general, the decomposition is not unique, but any choice yields a distribution $\mu$ with the
same objective value. Furthermore, under this correspondence, $x$ satisfies the marginal constraints
(5) if and only if $\mu$ has the correct marginals:
$$\sum_{u \in V_\alpha} \sum_{v \in V} x^t_{uv} = \sum_{u \in V_\alpha} \sum_{v \in V} \Pr[Y_t = u, Y_{t+1} = v] = \sum_{u \in V_\alpha} \Pr[Y_t = u] = \Pr[Y_t \in V_\alpha].$$
Finally, we can rewrite the objective function in terms of paths:
$$\sum_{u,v,t} c(u, v)\, x^t_{uv} = \sum_{y \in Y} \mu(y)\, c(y) = E_\mu[c(Y)] = E_\mu[-\log \pi(Y)].$$
By switching signs and changing from minimization to maximization, we see that RELAX solves
the fractional path problem. This problem is very similar to maximum entropy or minimum cross
entropy modeling, but the details are slightly different: such a model would typically find the distribution $\mu$ with the correct marginals that minimizes the cross entropy or Kullback-Leibler divergence [11] between $\mu$ and $\pi$, which, after removing a constant term, reduces to minimizing
$E_\mu[-\log \pi(Y)]$. Like IP, the RELAX problem also decomposes into subproblems in the case when
$\sigma$ is one-to-one, but this simplification is incompatible with the slack variables introduced in the
following section.
4.1 Incorporating Slack
In our application, the marginal distributions $q_t(\alpha)$ are themselves estimates, and it is useful to allow
the LP to deviate slightly from these marginals to find a better overall solution. To accomplish this,
we add slack variables $\delta^t_\alpha$ into the marginal constraints (5), and charge for the slack in the objective
function. The new marginal constraints are
$$\sum_{u \in V_\alpha} \sum_{v \in V} x^t_{uv} = q_t(\alpha) + \delta^t_\alpha \quad \text{for all } \alpha, t, \qquad (5')$$
and we add the term $\sum_{\alpha,t} \lambda^t_\alpha |\delta^t_\alpha|$ into the objective function to charge for the slack, using a standard
LP trick [8] to model the absolute value term. The slack costs $\lambda^t_\alpha$ can be tailored to individual input
values; for example, one may want to charge more to deviate from a confident estimate. This will
depend on the specific application. We also add the necessary constraints to ensure that the new
marginals $q'_t(\alpha) = q_t(\alpha) + \delta^t_\alpha$ form a valid probability distribution for all $t$.
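The absolute value $|\delta^t_\alpha|$ can be handled with the standard split $\delta = \delta^+ - \delta^-$, $\delta^+, \delta^- \ge 0$, so that $|\delta| = \delta^+ + \delta^-$ at any optimum. The sketch below builds one such constraint row and the extended cost vector; it is schematic, omitting the flow-conservation rows and the validity constraints on $q'_t$, and the variable ordering is an assumption of the sketch.

```python
import numpy as np

def marginal_row_with_slack(n_flow, n_slack, flow_idx, slack_idx):
    """One constraint (5'): sum of the indicated flow variables equals
    q_t(alpha) + delta. Variable vector is [x | delta_plus | delta_minus],
    so the row encodes  sum(x) - delta_plus + delta_minus = q_t(alpha)."""
    row = np.zeros(n_flow + 2 * n_slack)
    row[list(flow_idx)] = 1.0                  # sum over u in V_alpha, v in V
    row[n_flow + slack_idx] = -1.0             # - delta_plus
    row[n_flow + n_slack + slack_idx] = 1.0    # + delta_minus
    return row

def objective_with_slack(c_flow, lam):
    # transition costs, then lambda charged once for each slack half
    return np.concatenate([c_flow, lam, lam])
```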
5 Demonstration
In this section, we demonstrate our techniques by using the fractional path problem to create visualizations showing likely migration routes of Archilochus colubris, the Ruby-throated Hummingbird,
a common bird whose range is relatively well covered by eBird observations. We work in discretized space and time, dividing the map into grid cells and the year into weeks. We must specify
the Markov model governing transitions between locations (grid cells) in successive weeks; also,
we require estimates $q_t(\alpha)$ for the weekly distributions of hummingbirds across locations. Since the
actual eBird observations are highly non-uniform in space and time, estimating weekly distributions requires significant inference for locations with few or no observations. In the appendix, we
outline one approach based on harmonic energy minimization [12], but we may use any technique
that produces weekly distributions $q_t(u)$ and slack costs $\lambda^t_u$. Improving these estimates, say, by
incorporating important side information such as climate and habitat features, could significantly
improve the overall model. Finally, although our final observations $q_t(\alpha)$ are distributions over states
(locations) and not output symbols (i.e., $\sigma$ is one-to-one), we cannot use the simplification from
section 3.3 because we incorporate slack into the model.
5.1 eBird Data
Launched in 2002, eBird is a citizen science project run by the Cornell Lab of Ornithology, leveraging the data gathering power of the public. On the eBird website, birdwatchers submit checklists
of birds they observe, indicating a count for each species, along with the location, date, time and
additional information. Our data set consists of the 428,648 complete checklists from 1995³ through
2007, meaning the reporter listed all species observed. This means we can infer a count of zero, or
a negative observation, for any species not listed. Using a land cover map from the United States
Geological Survey (USGS), we divide North America into grid cells that are roughly 225 km on a
side. All years of data are aggregated into one, and the year is divided into weeks so t = 1, . . . , 52
represents the week of the year.
5.2 Migration Inference
Given weekly distributions $q_t(u)$ and slack costs $\lambda^t_u$ (see the appendix), it remains to specify
the Markov model. We use a simple Gaussian model favoring short flights, letting $p(u, v) \propto
\exp(-d(u, v)^2 / \sigma^2)$, where $d(u, v)$ measures the distance between grid cell centers. This corresponds to a squared distance cost function. To reduce problem size, we omitted variables $x^t_{uv}$ from
the LP when $d(u, v) > 1350$ km, effectively setting $p(u, v) = 0$. We also found it useful to impose
upper bounds $\delta^t_u \le q_t(u)$ on the slack variables so no single value could increase by more than a
factor of two. Our final LP, which was solved using the MOSEK optimization toolbox, had 78,521
constraints and 3,031,116 variables.
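The transition model itself is easy to reproduce from cell-center coordinates; a sketch follows. The bandwidth argument is a placeholder, not the value used for the results in Figure 3.

```python
import numpy as np

def transition_matrix(centers, sigma_km, cutoff_km=1350.0):
    """p(u, v) proportional to exp(-d(u, v)^2 / sigma^2), zeroed beyond the
    cutoff and renormalized so each row sums to one. Assumes every cell
    keeps at least one reachable neighbor within the cutoff."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    P = np.exp(-(d / sigma_km) ** 2)
    P[d > cutoff_km] = 0.0                 # drop very long flights entirely
    return P / P.sum(axis=1, keepdims=True)
```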
Figure 3 displays the migration paths our model inferred for the four weeks starting on the dates
indicated. The top row shows the distribution and paths inferred by the model; grid cells colored
³ Users may enter historical observations.
Figure 3: Ruby-throated Hummingbird migration for weeks 10 (March 5), 20 (May 14), 30 (July 28), and 40 (October 1). See text for description.
in lighter shades have more birds (higher values for $q'_t(u)$). Arrows indicate flight paths ($x^t_{uv}$)
between the week shown and the following week, with line width proportional to flow $x^t_{uv}$. In
the bottom row, the raw data is given for comparison. White dots indicate negative observations;
black squares indicate positive observations, with size proportional to count. Locations with both
positive and negative observations appear a charcoal color. The inferred distributions and paths are
consistent with both seasonal ranges and written accounts of migration routes. For example, in the
summary paragraph on migration from the Archilochus colubris species account in Birds of North
America [13], Robinson et al. write "Many fly across Gulf of Mexico, but many also follow coastal
route. Routes may differ for north- and southbound birds."
Acknowledgments
We are grateful to Daniel Fink, Wesley Hochachka and Steve Kelling from the Cornell Lab of
Ornithology for useful discussions. This work was supported in part by ONR Grant N00014-01-10968 and by NSF grant CCF-0635028. The views and conclusions herein are those of the authors
and do not necessarily represent the official policies or endorsements of these organizations or the
US Government.
References
[1] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, 1989.
[2] E. Charniak. Statistical techniques for natural language parsing. AI Magazine, 18(4):33–44, 1997.
[3] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological sequence analysis: Probabilistic models of proteins and nucleic acids. Cambridge University Press, 1998.
[4] R. Caruana, M. Elhawary, A. Munson, M. Riedewald, D. Sorokina, D. Fink, W. M. Hochachka, and S. Kelling. Mining citizen science data to predict prevalence of wild bird species. In SIGKDD, 2006.
[5] S. J. Phillips, M. Dudík, and R. E. Schapire. A maximum entropy approach to species distribution modeling. In ICML, 2004.
[6] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. ICML, 2001.
[7] D. Roth and W. Yih. Integer linear programming inference for conditional random fields. ICML, 2005.
[8] V. Chvátal. Linear Programming. W.H. Freeman, New York, NY, 1983.
[9] A. V. Goldberg, S. A. Plotkin, and E. Tardos. Combinatorial algorithms for the generalized circulation problem. Math. Oper. Res., 16(2):351–381, 1991.
[10] G. B. Dantzig. Application of the simplex method to a transportation problem. In T. C. Koopmans, editor, Activity Analysis of Production and Allocation, volume 13 of Cowles Commission for Research in Economics, pages 359–373. Wiley, 1951.
[11] J. Shore and R. Johnson. Properties of cross-entropy minimization. IEEE Trans. on Information Theory, 27:472–482, 1981.
[12] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, 2003.
[13] T. R. Robinson, R. R. Sargent, and M. B. Sargent. Ruby-throated Hummingbird (Archilochus colubris). In A. Poole and F. Gill, editors, The Birds of North America, number 204. The Academy of Natural Sciences, Philadelphia, and The American Ornithologists' Union, Washington, D.C., 1996.
[14] D. Aldous and J. Fill. Reversible Markov Chains and Random Walks on Graphs. Monograph in preparation, http://www.stat.berkeley.edu/users/aldous/RWG/book.html.
A Estimating Weekly Distributions from eBird
Our goal is to estimate $q_t(u)$, the fraction of birds in grid cell $u$ during week $t$. Given enough
observations, we can estimate $q_t(u)$ using the average number of birds counted per checklist, a
quantity we call the rate $r_t(u)$. However, even for a bird with good eBird coverage, there are cells
with few or no observations during some weeks. To fill these gaps, we use the harmonic energy
minimization technique [12] to determine values for empty cells based on neighbors in space and
time. This technique uses a graph-based similarity structure, in our case the 3-dimensional lattice
built on points $u_t$, where $u_t$ represents cell $u$ during week $t$. Edges are weighted, with weights
representing similarity between points. Point $u_t$ is connected to its four grid neighbors in time slice
$t$ by edges of unit weight, excluding edges between cells separated by water (specifically, when the
line connecting the centers is more than half water). Point $u_t$ is also connected to points $u_{t-1}$ and
$u_{t+1}$ with weight 1/4, to achieve some temporal smoothing.
Harmonic energy minimization learns a function $f$ on the graph; the idea is to match $r_t(u)$ on points
with sufficient data and find values for other points according to the similarity structure. To this
end, we designate some boundary points for which the value of $f$ is fixed by the data, while other
points are interior points. The value of $f$ at interior point $u_t$ is determined by the expected value
of the following random experiment: perform a random walk starting from $u_t$, following outgoing
edges with probability proportional to their weight. When the walk first hits a boundary point
$v_{t_0}$, terminate and accept the boundary value $f(v_{t_0})$. In this way, the values at interior points are
a weighted average of nearby boundary values, where "nearness" is interpreted as the absorption
probability in an absorbing random walk. We derive a measure of confidence in the value $f(u_t)$
from the same experiment: let $h(u_t)$ be the expected number of steps for the random walk from $u_t$
to hit the boundary (the hitting time of the boundary set [14]). When $h(u_t)$ is small, $u_t$ is close to
the boundary and we are more confident in $f(u_t)$.
Rather than choosing a threshold on the number of observations required to be a boundary point,
we create a soft boundary by designating all points $u_t$ as interior points, and adding one boundary
node to the graph structure for each observation, connected by an edge of unit weight to the cell
in which it occurred, with value equal to the number of birds observed. As point $u_t$ gains more
observations, its behavior approaches that of a hard boundary: with probability approaching one, the
walk from $u_t$ will reach an observation in the first step, so $f(u_t)$ will approach $r_t(u)$, the average of
the observations. As a conservative measure, each node is also connected to a sink with boundary
value 0, to prevent values from propagating over very long distances.
We compute $h$ and $f$ iteratively using standard techniques. Since $f(u_t)$ approximates the rate $r_t(u)$,
we multiply by the land mass of cell $u$ to get an estimate $\hat q_t(u)$ for the (relative) number of birds
in cell $u$ at time $t$. Finally, we normalize $\hat q$ for each time slice $t$, taking $q_t(u) = \hat q_t(u) / \sum_u \hat q_t(u)$.
For slack costs, we set $\lambda^t_u = \lambda_0 / h(u_t)$, inversely proportional to the boundary hitting time, with
$\lambda_0 \approx 261$ chosen in conjunction with the transition costs in section 5.2 so the average cost for a unit
of slack is the same as moving 600 km.
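The harmonic values admit a simple fixed-point computation: repeatedly replace each value by the weighted average of its graph neighbors and its attached observations. The sketch below works on a generic weighted graph; the iteration count is a placeholder, the sink is folded into the boundary terms, and the hitting times $h$ (which follow the same averaging pattern with an added one per step) are omitted.

```python
import numpy as np

def harmonic_interpolate(W, boundary_value, boundary_weight, n_iter=500):
    """W: (n, n) symmetric weights between interior points. boundary_value[i]
    is the (mean) observed value attached to point i, boundary_weight[i] its
    total observation weight (0 if unobserved; include the sink, value 0,
    as extra boundary weight). Jacobi iteration toward the harmonic fixed
    point: f(i) = weighted average of neighbors and observations."""
    n = W.shape[0]
    f = np.zeros(n)
    deg = W.sum(axis=1) + boundary_weight     # total outgoing weight per point
    for _ in range(n_iter):
        f = (W @ f + boundary_weight * boundary_value) / np.maximum(deg, 1e-12)
    return f
```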
A configurable analog VLSI neural network with
spiking neurons and self-regulating plastic synapses
which classifies overlapping patterns
M. Giulioni∗
Italian National Inst. of Health, Rome, Italy
INFN-RM2, Rome, Italy
giulioni@roma2.infn.it
D. Badoni
INFN-RM2, Rome, Italy
M. Pannunzi
Italian National Inst. of Health, Rome, Italy
INFN-RM1, Rome, Italy
V. Dante
Italian National Inst. of Health, Rome, Italy
INFN-RM1, Rome, Italy
P. Del Giudice
Italian National Inst. of Health, Rome, Italy
INFN-RM1, Rome, Italy
Abstract
We summarize the implementation of an analog VLSI chip hosting a network
of 32 integrate-and-fire (IF) neurons with spike-frequency adaptation and 2,048
Hebbian plastic bistable spike-driven stochastic synapses endowed with a self-regulating mechanism which stops unnecessary synaptic changes. The synaptic
matrix can be flexibly configured and provides both recurrent and AER-based connectivity with external, AER compliant devices. We demonstrate the ability of the
network to efficiently classify overlapping patterns, thanks to the self-regulating
mechanism.
1 Introduction
Neuromorphic analog VLSI devices [12] try to derive organizational and computational principles
from biologically plausible models of neural systems, aiming at providing in the long run an electronic substrate for innovative, bio-inspired computational paradigms.
In line with standard assumptions in computational neuroscience, neuromorphic devices are endowed with adaptive capabilities through various forms of plasticity in the synapses which connect
the neural elements. A widely adopted framework goes under the name of Hebbian learning, by
which the efficacy of a synapse is potentiated (the post-synaptic effect of a spike is enhanced) if the
pre- and post-synaptic neurons are simultaneously active on a suitable time scale. Different mechanisms have been proposed, some relying on the average firing rates of the pre- and post-synaptic
neurons, (rate-based Hebbian learning), others based on tight constraints on the time lags between
pre- and post-synaptic spikes (?Spike-Timing-Dependent-Plasticity?).
The synaptic circuits described in what follows implement a stochastic version of rate-based Hebbian learning. In the last decade, it has been realized that general constraints plausibly met by any
concrete implementation of a synaptic device in a neural network, bear profound consequences on
∗ http://neural.iss.infn.it/
the capacity of the network as a memory system. Specifically, once one accepts that a synaptic
element can neither have an unlimited dynamic range (i.e. synaptic efficacy is bounded), nor can it
undergo arbitrarily small changes (i.e. synaptic efficacy has a finite analog depth), it has been proven
([1], [7]) that a deterministic learning prescription implies an extremely low memory capacity, and a
severe "palimpsest" property: new memories quickly erase the trace of older ones. It turns out that a
stochastic mechanism provides a general, logically appealing and very efficient solution: given the
pre- and post-synaptic neural activities, the synapse is still made eligible for changing its efficacy
according to a Hebbian prescription, but it actually changes its state with a given probability. The
stochastic element of the learning dynamics would imply ad hoc new elements, were it not for the
fact that for a spike-driven implementation of the synapse, the noisy activity of the neurons in the
network can provide the needed "noise generator" [7]. Therefore, for an efficient learning electronic
network, the implementation of the neuron as a spiking element is not only a requirement of "biological plausibility", but a compelling computational requirement. Learning in networks of spiking IF
neurons with stochastic plastic synapses has been studied theoretically [7], [10], [2], and stochastic,
bi-stable synaptic models have been implemented in silicon [8], [6]. One of the limitations so far,
both at the theoretical and the implementation level, has been the artificially simple statistics of the
stimuli to be learnt (e.g., no overlap between their neural representations). Very recently in [4] a
modification of the above stochastic, bi-stable synaptic model has been proposed, endowed with a
regulatory mechanism termed "stop learning" such that synaptic up or down-regulation depends on
the average activity of the postsynaptic neuron in the recent past; a synapse pointing to a neuron that
is found to be highly active, or poorly active, should not be further potentiated or depressed, respectively. The reason behind the prescription is essentially that for correlated patterns to be learnt by
the network, a successful strategy should de-emphasize the coherent synaptic Hebbian potentiation
that would result for the overlapping part of the synaptic matrix, and that would ultimately spoil the
ability to distinguish the patterns. A detailed learning strategy along this line was proven in [13] to
be appropriate for linearly separable patterns for a Perceptron-like network; the extension to spiking
and recurrent networks is currently studied.
In section 2 we give an overview of the chip architecture and of the implemented synaptic model.
In section 3 we show an example of the measures effectuated on the chip useful to characterize the
synaptic and neuronal parameters. In section 4 we report some characterization results compared
with a theoretical prediction obtained from a chip-oriented simulation. The last paragraph describes
chip performance in a simple classification task, and illustrates the improvement brought about by
the stop-learning mechanism.
2 Chip architecture and main features
The chip, already described in [3], implements a recurrent network of 32 integrate-and-fire neurons
with spike-frequency adaptation and bi-stable, stochastic, Hebbian synapses. A completely reconfigurable synaptic matrix supports up to all-to-all recurrent connectivity, and AER-based external
connectivity. Besides establishing an arbitrary synaptic connectivity, the excitatory/inhibitory nature of each synapse can also be set.
The implemented neuron is the IF neuron with constant leakage term and a lower bound for the
membrane potential V (t) introduced in [12] and studied theoretically in [9]. The circuit is borrowed
from the low-power design described in [11], to which we refer the reader for details. Only 2
neurons can be directly probed (i.e., their ?membrane potential? sampled), while for all of them the
emitted spikes can be monitored via AER [5]. The dendritic tree of each neuron is composed of
up to 31 activated recurrent synapses and up to 32 activated external, AER ones. For the recurrent
synapses, each impinging spike triggers short-time (and possibly long-term) changes in the state of
the synapse, as detailed below. Spikes from neurons outside the chip come in the form of AER
events, and are targeted to the correct AER synapse by the X-Y Decoder. Synapses which are set to
be excitatory, either AER or recurrent, are plastic; inhibitory synapses are fixed. Spikes generated by
the neurons in the chip are arbitrated for access to the AER bus for monitoring and/or mapping to
external targets.
The synaptic circuit described in [3] implements the model proposed in [4] and briefly motivated in
the Introduction. The synapse possesses only two states of efficacy (a bi-stable device): the internal
synaptic dynamics is associated with an internal variable X; when X > θ_X the efficacy is set to be
potentiated, otherwise it is set to be depressed. X is subjected to short-term, spike-driven dynamics: upon the arrival of an impinging spike, X is a candidate for an upward or downward jump, depending on the instantaneous value of the post-synaptic potential Vpost being above or below a threshold θ_V. The jump is actually performed or not depending on a further variable, as explained below. In the absence of intervening spikes X is forced to drift towards a "high" or "low" value depending on whether the last jump left it above or below θ_X. This preserves the synaptic efficacy on long time scales.
A further variable is associated with the post-synaptic neuron dynamics, which essentially measures the average firing activity. Following [4], by analogy with the role played by the intracellular
concentration of calcium ions upon spike emission, we will call it a "calcium variable" C(t). C(t)
undergoes an upward jump when the postsynaptic neuron emits a spike, and linearly decays between
two spikes. It therefore integrates the spike sequence and, when compared to suitable thresholds as
detailed below, it determines which candidate synaptic jumps will be allowed to occur; for example,
it can constrain the synapse to stop up-regulating because the post-synaptic neuron is already very
active. C(t) acts as a regulatory element of the synaptic dynamics.
The resulting short-term dynamics for the internal synaptic variable X is described by the following
conditions: X(t) → X(t) + J_up if V_post(t) > θ_V and V_TH1 < C(t) < V_TH3; X(t) → X(t) − J_dw
if V_post(t) ≤ θ_V and V_TH1 < C(t) < V_TH2, where J_up and J_dw are positive constants. Detailed
description of circuits implementing these conditions can be found in [3].
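For concreteness, the gated jump-and-drift rule can be written out as a short simulation step. The following Python sketch is illustrative only: the parameter values, the drift rate and the [0, 1] clipping range are assumptions, not values taken from the chip.

```python
import numpy as np

def synapse_step(X, V_post, C, dt, spike_arrived,
                 theta_X=0.5, theta_V=0.5,
                 V_TH1=0.1, V_TH2=0.8, V_TH3=0.8,
                 J_up=0.1, J_dw=0.1, drift_rate=1.0):
    """One update of the internal synaptic variable X (arbitrary units;
    all parameter values are illustrative assumptions)."""
    if spike_arrived:
        # Candidate jump on a pre-synaptic spike, gated by C(t)
        if V_post > theta_V and V_TH1 < C < V_TH3:
            X += J_up
        elif V_post <= theta_V and V_TH1 < C < V_TH2:
            X -= J_dw
    else:
        # Between spikes, X drifts towards its upper or lower bound,
        # depending on which side of theta_X the last jump left it
        X += dt * (drift_rate if X > theta_X else -drift_rate)
    return float(np.clip(X, 0.0, 1.0))

def efficacy(X, theta_X=0.5):
    # Bi-stable read-out: potentiated above theta_X, depressed below
    return "potentiated" if X > theta_X else "depressed"
```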
In figure 1 we illustrate the effect of the calcium dynamics on X. Increasing input forces the postsynaptic neuron to fire at increasing frequencies. As long as C(t) < V_TH2 = V_TH3, X undergoes both up and down jumps. When C(t) > V_TH2 = V_TH3, jumps are inhibited and X is forced to drift towards its lower bound.
[Figure 1 traces omitted: Vpost(t), C(t) with threshold V_TH2, X(t) and Vpre(t); voltage scale bars of 1 V (2 V for Vpre), time bar 40 ms.]
Figure 1: Illustrative example of the stop-learning mechanism (see text). Top to bottom: postsynaptic neuron potential Vpost, calcium variable C, internal synaptic variable X, pre-synaptic neuron potential Vpre.
3 LTP/LTD probabilities: measurement vs. chip-oriented simulation
We report synapse potentiation (LTP) / depression (LTD) probabilities measured from the chip and we compare the experimental results to simulations.
For each synapse in a subset of 31, we generate a pre-synaptic Poisson spike train at 70 Hz. The post-synaptic neuron is forced to fire a Poisson spike train by applying an external DC current and a Poisson train of inhibitory spikes through AER. Setting to zero both the potentiated and depressed
efficacies, the activity of the post-synaptic neuron can be easily tuned by varying the amplitude of
the DC current and the frequency of the inhibitory AER train. We initialize the 31 (AER) synapses
to depressed (potentiated) and we monitor the post-synaptic neuron activity during a stimulation
trial lasting 0.5 seconds. At the end of the trial we read the synaptic state using an AER protocol developed for this purpose. For each chosen value of the post-synaptic firing rate, we evaluate the probability of finding synapses in a potentiated (depressed) state by repeating the test 50 times. The results reported in figure 2 (solid lines) represent the average LTP and LTD probabilities per trial over the 31 synapses. Tests were performed both with active and inactive Calcium mechanism. When the calcium mechanism is inactive, the LTP probability is monotonically increasing with the post-synaptic firing rate, while when the calcium circuit is activated the LTP probability has a maximum for ν_post around 80 Hz.
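In pseudocode form, the measurement loop reduces to repeated 0.5 s trials per target rate. In the sketch below, `run_trial` is a hypothetical stand-in for one stimulation of the chip (or of the chip-oriented simulator) followed by the AER read-out of the 31 synaptic states; it is not part of any published interface.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_train(rate_hz, duration_s, dt=1e-3):
    # Boolean spike train at resolution dt, e.g. the 70 Hz pre-synaptic input
    return rng.random(int(duration_s / dt)) < rate_hz * dt

def estimate_ltp_probability(run_trial, post_rates, n_trials=50):
    """For each target post-synaptic rate, run 50 independent 0.5 s trials
    and return the mean and standard deviation of the fraction of synapses
    found potentiated at the end (as plotted in figure 2)."""
    means, stds = [], []
    for rate in post_rates:
        # run_trial(rate) -> boolean array over the 31 synapses
        fractions = [run_trial(rate).mean() for _ in range(n_trials)]
        means.append(float(np.mean(fractions)))
        stds.append(float(np.std(fractions)))
    return means, stds
```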
Identical tests were also run in simulation (dashed curves in figure 2). For the purpose of a meaningful comparison with the chip behaviour, the relevant parameters affecting neural and synaptic dynamics and their distributions (due to inhomogeneities and mismatches) were characterized.
Simulated and measured data are in qualitative agreement. The parameters we chose for these tests
are the same used for the classification task described in the next paragraph.
[Figure 2 plot omitted: fraction of potentiated synapses w+ (0 to 0.6) versus ν_post [Hz] (0 to 300); experiment as solid lines, simulation as dashed lines.]
Figure 2: Transition probabilities. Red and blue lines are LTP probabilities with and without the calcium stop-learning mechanism, respectively. Gray lines are LTD probabilities without the calcium stop-learning mechanism; the case of LTD with the Ca mechanism is not shown. Error bars are standard deviations over the 50 trials.
4 Learning overlapping patterns
We configured the synaptic matrix to obtain a perceptron-like network with 1 output and 32 inputs (32 AER synapses). 31 synapses are set as plastic excitatory ones; the 32nd is set as inhibitory and used to modulate the post-synaptic neuron activity. Our aim is to teach the perceptron to classify two patterns, "Up" and "Down", through a semi-supervised learning strategy. We expect that after learning the perceptron will respond with high output frequency for pattern "Up" and with low output frequency for pattern "Down". The self-regulating Ca mechanism is exploited to improve performance when the Up and Down patterns have a significant overlap. The learning is semi-supervised: for each pattern a "teacher" input is sent to the output neuron steering its activity to be high or low,
as desired. At the end of the learning period the "teacher" is turned off and the perceptron output is driven only by the input stimuli: in these conditions its classification ability is tested.
We present learning performances for input patterns with increasing overlap, and demonstrate the
effect of the stop learning mechanism (overlap ranging from 6 to 14 synapses).
Upon stimulation, active pre-synaptic inputs are Poisson spike trains at 70 Hz, while inactive inputs are Poisson spike trains at 10 Hz. Each trial lasts half a second. Up and Down patterns are randomly presented with equal probability. The teaching signal, a combination of an excitatory constant current and an inhibitory AER spike train, forces the output firing rate to 50 or 0.5 Hz. One run lasts
for 150 trials which is sufficient for the stabilization of the output frequencies. At the end of each
trial we turn off the teaching signal, we freeze the synaptic dynamics and we read the state of each
synapse using an AER protocol developed for this purpose. In these conditions we performed a 5-second test ("Checking Phase") to measure the perceptron frequencies when pattern Up or pattern Down is presented. Each experiment includes 50 runs. For each run we change: a) the "definition"
of patterns Up and Down: inputs activated by pattern Up and Down are chosen randomly at the
beginning of each run; b) the initial synaptic state, with the constraint that only about 30 % of the
synapses are potentiated; c) the stimulation sequence.
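The protocol maps onto two nested loops (runs and trials). In the sketch below, `chip` is a hypothetical wrapper around the hardware/simulator interface; its methods (`randomize_patterns`, `init_synapses`, `stimulate`, `read_synapses`) are invented names introduced only for illustration.

```python
import numpy as np

def run_experiment(chip, n_runs=50, n_trials=150, seed=0):
    """Skeleton of the classification experiment described above."""
    rng = np.random.default_rng(seed)
    runs = []
    for _ in range(n_runs):
        chip.randomize_patterns()              # (a) redefine Up/Down inputs
        chip.init_synapses(p_potentiated=0.3)  # (b) ~30% potentiated start
        trace = []
        for _ in range(n_trials):              # (c) random stimulation order
            pattern = "Up" if rng.random() < 0.5 else "Down"
            chip.stimulate(pattern, teacher=True, duration_s=0.5)
            states = chip.read_synapses()       # learning frozen, AER read-out
            # 5 s checking phase with the teacher turned off
            f_up = chip.stimulate("Up", teacher=False, duration_s=5.0)
            f_dn = chip.stimulate("Down", teacher=False, duration_s=5.0)
            trace.append((states, f_up, f_dn))
        runs.append(trace)
    return runs
```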
For the first experiment we turned off the stop learning mechanism and we chose orthogonal patterns.
In this case the perceptron was able to correctly classify the stimuli: after about 50 trials, choosing a
suitable threshold, one can discriminate the perceptron output for the different patterns (lower left panel in figure 4). The output frequency separation slightly increases until trial number 100, remaining
almost stable after that point.
We then studied the case of overlapped patterns both with active and inactive Calcium mechanism.
We repeated the experiment with an increasing overlap: 6, 10 and 14 (implying an increase in the coding level from 0.5 for the orthogonal case to 0.7 for the overlap equal to 14). Only the upper threshold K_high^up is active (the threshold above which up jumps are inhibited). The Calcium circuit parameters are tuned so that the Ca variable passes K_high^up for a mean firing rate of the postsynaptic neuron around 80 Hz. We show in figure 3 the distributions of the potentiated fraction of
the synapses over the 50 runs at different stages along the run for overlap 10 with inactive (upper
panels) and active (lower panels) calcium mechanism. We divided the synapses into three subgroups: Up (red) synapses, with pre-synaptic input activated solely by the Up pattern; Down (blue) synapses, with pre-synaptic inputs activated only by the Down pattern; and Overlap (green) synapses, with pre-synaptic inputs activated by both pattern Up and pattern Down. The state of the synapses is recorded after every
learning step. Accumulating statistics over the 50 runs we obtain the distributions reported in figure
3. The fraction of potentiated synapses is calculated over the number of synapses belonging to each
subgroup.

[Figure 3 plots omitted: histograms P(w+) of the potentiated fraction w+ (0 to 1) at trials 2, 50, 100 and 150, with the Ca mechanism inactive (upper row) and active (lower row); separate histograms for the Overlap, Up and Down synapse subgroups.]
Figure 3: Distribution of the fraction of potentiated synapses. The number of inputs belonging to both patterns is 10.

When the stop-learning mechanism is inactive, at the end of the experiment the green
distribution of the overlap synapses is broad; when the Calcium mechanism is active, the overlap synapses
tend to be depotentiated. This result is the "microscopic" effect of the stop-learning mechanism: once the number of potentiated synapses is sufficient to drive the perceptron output frequency above 80 Hz, the overlap synapses tend to be depotentiated. Overlap synapses would be pushed half of the times to the potentiated state and half of the times to the depressed state, so that it is more likely for the Up synapses to reach the potentiated state earlier. When the stop-learning mechanism is active, once the potentiated synapses are enough to drive the output neuron to about 80 Hz, further potentiation is inhibited for all synapses, so that overlap synapses get depressed on average. This happens under the condition that the transition probabilities are sufficiently small, so that at each trial the learning is not
completely disrupted. The distribution of the output frequencies for increasing overlap is illustrated
in figure 4 (Ca mechanism inactive in the upper panels, active for the lower panels). The frequencies
are recorded during the "checking phase". In blue are the histograms of the output frequency for the Down pattern, in red those for the Up pattern. It is clear from the figure that the output frequency distributions remain well separated even for high overlap when the Calcium mechanism is active.
A quantitative parameter to describe the separation of the distributions is

    ξ = (ν̄_up − ν̄_dn) / sqrt(σ²_{ν,up} + σ²_{ν,dn})    (1)
ξ values are summarized in Table 1.
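Given the checking-phase frequencies collected over the runs, Eq. (1) is a one-line sample statistic. Whether the original analysis used the biased or unbiased variance estimator is not stated, so NumPy's default is an assumption here.

```python
import numpy as np

def discrimination_power(freqs_up, freqs_dn):
    """Separation measure of Eq. (1) from checking-phase output frequencies."""
    up, dn = np.asarray(freqs_up, float), np.asarray(freqs_dn, float)
    return (up.mean() - dn.mean()) / np.sqrt(up.var() + dn.var())
```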
[Figure 4 plots omitted: histograms P(ν_ck) of the checking-phase output frequency ν_ck [Hz] (0 to 200) for overlaps 0, 6, 10 and 14, with the Ca mechanism inactive (upper row) and active (lower row); blue histograms for pattern Down, red for pattern Up.]
Figure 4: Distributions of perceptron frequencies after learning two overlapped patterns. Blue bars refer to pattern Down stimulation, red bars to pattern Up. Each panel corresponds to a different overlap.
Table 1: Discrimination power [seconds]

          overlap 0   overlap 6   overlap 10   overlap 14
Ca OFF    4.39        1.87        1.59         0.99
Ca ON     5.29        2.20        1.88         1.66
For each run the number of potentiated synapses is different due to the random choices of Up, Down
and Overlap synapses for each run and the mismatches affecting the behavior of different synapses.
The failure of the discrimination for high overlap in the absence of this stop learning mechanism
is due to the fact that the number of potentiated synapses can overcome the effect of the teaching
signal for the down pattern. The Calcium mechanism, defining a maximum number of allowed
potentiated synapses, limits this problem. This offers the possibility of establishing an a priori threshold to discriminate the perceptron outputs on the basis of the frequency corresponding to the maximum
value of the LTP probability curve.
5 Conclusions
We briefly illustrate an analog VLSI chip implementing a network of 32 IF neurons and 2,048
reconfigurable, Hebbian, plastic, stop-learning synapses. Circuit parameters have been measured, as well as their dispersion across the chip. Using these data a chip-oriented simulation was set up, and its results, compared to the experimental ones, demonstrate that the circuit behavior follows the theoretical predictions. Once the network was configured as a perceptron (31 AER synapses and one output neuron), a classification task was performed. Stimuli with an increasing overlap have been used. The
results show the ability of the network to efficiently classify the presented patterns as well as the
improvement in performance due to the calcium stop-learning mechanism.
References
[1] D.J. Amit and S. Fusi. Neural Computation, 6:957, 1994.
[2] D.J. Amit and G. Mongillo. Neural Computation, 15:565, 2003.
[3] D. Badoni, M. Giulioni, V. Dante, and P. Del Giudice. In Proc. IEEE International Symposium
on Circuits and Systems ISCAS06, pages 1227?1230, 2006.
[4] J.M. Brader, W. Senn, and S. Fusi. Neural Computation (in press), 2007.
[5] V. Dante, P. Del Giudice, and A. M. Whatley. The neuromorphic engineer newsletter. 2005.
[6] E. Chicca et al. IEEE Transactions on Neural Networks, 14(5):1297, 2003.
[7] S. Fusi. Biological Cybernetics, 87:459, 2002.
[8] S. Fusi, M. Annunziato, D. Badoni, A. Salamon, and D.J. Amit. Neural Computation, 12:2227,
2000.
[9] S. Fusi and M. Mattia. Neural Computation, 11:633, 1999.
[10] P. Del Giudice, S. Fusi, and M. Mattia. Journal of Physiology Paris, 97:659, 2003.
[11] G. Indiveri. In Proc. IEEE International Symposium on Circuits and Systems, 2003.
[12] C. Mead. Analog VLSI and neural systems. Addison-Wesley, 1989.
[13] W. Senn and S. Fusi. Neural Computation, 17:2106, 2005.
2,555 | 3,317 | A Bayesian LDA-based model for semi-supervised
part-of-speech tagging
Kristina Toutanova
Microsoft Research
Redmond, WA
kristout@microsoft.com
Mark Johnson
Brown University
Providence, RI
Mark_Johnson@brown.edu
Abstract
We present a novel Bayesian model for semi-supervised part-of-speech tagging.
Our model extends the Latent Dirichlet Allocation model and incorporates the
intuition that words' distributions over tags, p(t|w), are sparse. In addition we introduce a model for determining the set of possible tags of a word which captures
important dependencies in the ambiguity classes of words. Our model outperforms the best previously proposed model for this task on a standard dataset.
1 Introduction
Part-of-speech tagging is a basic problem in natural language processing and a building block for
many components. Even though supervised part-of-speech taggers have reached performance of
over 97% on in-domain data [1, 2], the performance on unknown in-domain words is below 90%
and the performance on unknown out-of-domain words can be below 70% [3]. Additionally, few
languages have a large amount of data labeled for part-of-speech. Thus it is important to develop
methods that can use unlabeled data to learn part-of-speech. Research on unsupervised or partially
supervised part-of-speech tagging has a long history [4, 5]. Recent work includes [6, 7, 8, 9, 10].
As in most previous work on partially supervised part-of-speech tagging, our model takes as input
a (possibly incomplete) tagging dictionary, specifying, for some words, all of their possible parts
of speech, as well as a corpus of unlabeled text. Our model departs from recent work on semi-supervised part-of-speech induction using sequence HMM-based models, and uses solely observed
context features to predict the tags of words. We show that using this representation of context gives
our model substantial advantage over standard HMM-based models.
There are two main innovations of our approach. The first is that we incorporate a sparse prior on the
distribution over tags for each word, p(t|w), and employ a Bayesian approach that maintains a distribution over parameters, rather than committing to a single parameter value. Previous approaches
to part-of-speech tagging ([9, 10]) also use sparse priors and Bayesian inference, but do not incorporate sparse priors directly on the p(t|w) distribution. Our results demonstrate that encoding this
sparse prior and employing a Bayesian approach contributes significantly to performance.
The second innovation of our approach is that we explicitly model ambiguity class (the set of part-ofspeech tags a word type can appear with). We show that this also results in substantial performance
improvement. Our model outperforms the best-performing previously proposed model for this task
[7], with an error reduction of up to 57% when the amount of supervision is small.
More formally, the task setting is as follows. Assume we are given a finite set of possible part-of-speech tags (labels) T = {t1, t2, . . . , tnT}. The set of part-of-speech tags for English we experiment
with has the 17 tags defined by Smith & Eisner [7], and is a coarse-grained version of the 45-tag set
in the English Penn Treebank. We are also given a dictionary which specifies the ambiguity classes
s ⊆ T for a subset of the word types w. The ambiguity class of a word type is the set of all of its
[Figure 1 plate diagram omitted: nodes ρ, s, u, λ_1…λ_4, m_1…m_4, ψ, w, β, φ, θ, t, α, ω_1…ω_4 and c_1…c_4, with plates L, W, T and M^4.]

s_i | ρ ~ Multi(ρ), i = 1, …, L
u_i | s_i ~ Uniform(s_i), i = 1, …, L
m_{j,i} | u_i, λ_j ~ Multi(λ_{j,u_i}), i = 1, …, L, j = 1, …, 4
w_i | m_i, ψ ~ Multi(ψ_{m_i}), i = 1, …, L
φ_i = Subset(β, s_i), i = 1, …, L
θ_i | φ_i ~ Dir(φ_i), i = 1, …, L
t_{i,j} | θ_i ~ Multi(θ_i), i = 1, …, L, j = 1, …, W_i
ω_{k,ℓ} | α ~ Dir(α), k = 1, …, 4, ℓ = 1, …, T
c_{k,i,j} | t_{i,j}, ω_k ~ Multi(ω_{k,t_{i,j}}), i = 1, …, L, j = 1, …, W_i, k = 1, …, 4
Figure 1: A graphical model for the tagging model. In this model, each word type w is associated
with a set s of possible parts-of-speech (ambiguity class), and each of its tokens is associated with
a part-of-speech tag t, which generates the context words c surrounding that token. The ambiguity
class s also generates the morphological features m of the word type w via a hidden tag u ∈ s.
The dotted line divides the model into the ambiguity class model (on the left) and the word context
model (on the right).
possible tags. For example, the dictionary might specify that walks has the ambiguity class {N, V }
which means that walks can never have a tag which is not an N or a V. Additionally, we are given a
large amount of unlabeled natural language text. The task is to label each word token with its correct
part-of-speech tag in the corresponding context.
This task formulation corresponds to a problem in computational linguistics that frequently arises in
practice, because the only available resources for many languages consist of a manually constructed
dictionary and a text corpus. Note that it differs from the standard semi-supervised learning setting,
where we are given a small amount of labeled data and a large amount of unlabeled data. In the
setting we study, we are never given labeled data, but are given instead constraints on possible tags
of some words (in the form of a dictionary).1
2 Graphical model
Our model is shown in Figure 1. In the figure, T is the set of part-of-speech tags, L is the set of word
types (i.e., the set of different orthographic forms), W is the set of tokens (i.e., occurrences) of the
word type w, and M 4 is the set of four-element morphological feature vectors described below.
This is a generative model for a sequence of word tokens in a text corpus along with part-of-speech
tags for all tokens, ambiguity classes for word types and other hidden variables. To generate the
text corpus, the model generates the instances of every word type together with their contexts in
(1) For some words, the dictionary specifies only one possible tag, e.g. information → {N}, in which case all instances of information can be assumed labeled with the tag N. However these constraints are not sufficient to result in fully labeled sentences.
turn. The generation of a word type and all of its occurrences can be decomposed into two steps,
corresponding to the left and right parts of the model: the ambiguity class model, and the word
context model (separated by a dotted line in the figure).
For every word type wi ∈ L (plate L in the figure), in the first step the model generates an ambiguity
class si ⊆ T of possible parts of speech. The ambiguity class si is the set of parts-of-speech that
tokens of wi can be labeled with. Our dictionary specifies si for some but not all word types wi .
The ambiguity class si is generated by a multinomial over 2^T with parameters ρ, with support on
the different values for s observed in the dictionary. The ambiguity class si for wi generates four
different morphological features m1,i , . . . , m4,i of wi representing the suffixes, capitalization, etc.,
of the orthographic form of the word type wi. These are generated by multinomials with parameters λ1,u, . . . , λ4,u respectively, where u ∈ s is a hidden variable generated by a uniform distribution over the members
of s. For completeness we generate the full surface form of the word type wi from a multinomial
distribution selected by its morphology features m1,i , . . . , m4,i . But since the morphology features
are always observed (they are determined by wi ?s orthographic form), we ignore this part of the
model. We discuss the ambiguity class model in detail in Section 3.1.
In the second step the word context model generates all instances wi,j of wi together with their
part-of-speech tags ti,j and context words (plate W in the figure). This is done by first choosing a
multinomial distribution θi over the tags in the set si, which is drawn from a Dirichlet with parameters φi and support si, where φi,t = βt for t ∈ si. That is, si identifies the subset of T to receive support in θi, but the value of φi,t for t ∈ si is specified by βt. Given these variables, all tokens
wi,j of the word wi together with their contexts are generated by first choosing a part-of-speech tag
ti,j from ?i and then choosing context words ck,i,j preceding and following the word token wi,j
according to tag-specific (depending on ti,j ) multinomial distributions. The context of a word token c1,i,j . . . , c4,i,j consists of the two preceding and two following words. For example, for the
sentence He often walks to school, the context words of that instance of walks are c1 =He, c2 =often,
c3 =to, and c4 =school. This representation of the context has been used previously by unsupervised
models for part-of-speech tagging in different ways [4, 8]. Each context word ck,i,j is generated
by a multinomial with parameters ωk,ti,j, where each ωk,t is in turn generated by a Dirichlet with parameters α. The parameters ωk,t are generated once for the whole corpus as indicated in the figure.
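To make the generative story concrete, the short Python sketch below samples one word type and a few of its tokens following the steps just described. The morphological features and the surface-form step are omitted for brevity, and all parameter values in the usage example are made-up toy numbers, not learned values.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_word_type(classes, rho, beta, omega, n_tokens=5):
    """Sample s_i, theta_i and n_tokens (tag, context) pairs for one word
    type, following Figure 1 (toy sketch; morphology step omitted)."""
    s = classes[rng.choice(len(classes), p=rho)]   # s_i ~ Multi(rho)
    theta = rng.dirichlet([beta[t] for t in s])    # theta_i ~ Dir(phi_i)
    tokens = []
    for _ in range(n_tokens):
        t = s[rng.choice(len(s), p=theta)]         # t_{i,j} ~ Multi(theta_i)
        # one context word per position k, c_k ~ Multi(omega[k][t])
        context = [int(rng.choice(len(omega[k][t]), p=omega[k][t]))
                   for k in range(4)]
        tokens.append((t, context))
    return s, theta, tokens

# toy usage with a 2-tag set, 3 ambiguity classes and a 4-word context vocab
tags = ["N", "V"]
classes = [("N",), ("V",), ("N", "V")]
omega = {k: {t: [0.25] * 4 for t in tags} for k in range(4)}
s, theta, tokens = generate_word_type(
    classes, rho=[0.4, 0.3, 0.3], beta={"N": 0.5, "V": 0.5}, omega=omega)
```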
A sparse Dirichlet prior on θi with parameter β < 1 allows us to exploit the fact that most words
have a very frequent predominant tag, and their distribution over tags p(t|w) is sparse. To verify
this, we examined the distribution of the 17-label tag set in the WSJ Penn Treebank. A classifier
that always chooses the most frequent tag for every word type, without looking at context, is 90.9%
accurate on ambiguous words, indicating that the distribution is heavily skewed.
Our model builds upon the Latent Dirichlet Allocation (LDA) model [11] by extending it in several
ways. If we assume that the only possible ambiguity class s for all words is the set of all tags
(and thus remove the ambiguity class model because it becomes irrelevant), and if we simplify our
word context model to generate only one context word (say the word in position −1), we would end up with the LDA model. In this simplified model, we could say that for every word type wi we have a document consisting of all word tokens that occur in position −1 of the word type wi in the corpus. Each context word ci,j in wi's document is generated by first choosing a tag (topic) from a word (document) specific distribution θi and then generating the word ci,j from a tag (topic) specific multinomial. The LDA model incorporates the same kind of Dirichlet priors on θ and ω that
our model uses. The additional power of our model stems from the model of ambiguity classes si
which can take advantage of the information provided by the dictionary, and from the incorporation
of multiple context features.
Finally, we note that our model is deficient, because the same word token in the corpus is independently generated multiple times (e.g., each token will appear in the context of four other words and
will be generated four times). Even though this is a theoretical drawback of the model, it remains
to be seen whether correcting for this deficiency (e.g., by re-normalization) would improve tagging
performance. Models with similar deficiencies have been successful in other applications (e.g. the
model described in [12], which achieved substantial improvements over the previous state-of-the-art
in unsupervised parsing).
3 Parameter estimation and tag prediction
Here we discuss our method of estimating the parameters of our model and making predictions,
given an (incomplete) tagging dictionary and a set of natural language sentences.
We train the parameters of the ambiguity class model, ρ, λ, and ψ, separately from the parameters of the word context model: α, β, ω, and θ. This is because the two parts of the model are connected only
via the variables si (the ambiguity classes of words), and when these ambiguity classes are given the
two sets of parameters are completely decoupled. The dictionary gives us labeled training examples
for the ambiguity class model, and we train the parameters of the ambiguity class model only from
this data (i.e., the word types in the dictionary). After training the ambiguity class model from the
dictionary we fix its parameters and estimate the word context model given these parameters.
3.1 Ambiguity class model: details and parameter estimation
Our ambiguity class model captures the strong regularities governing the possible tags of a word
type. Empirically we observe that the number of occurring ambiguity classes is very small relative
to the number of possible ambiguity classes. For example, in the WSJ Penn Treebank data, the
49,206 word types belong to 118 ambiguity classes. Modeling these (rather than POS tags directly)
constrains the model to avoid assignments of tags to word tokens which would result in improbable
ambiguity classes for word types. A related intuition has been used in other contexts before, e.g.
[13, 14], but without directly modeling ambiguity classes. The ambiguity class model contributes
to biasing p(t|w) toward sparse distributions as well, because most ambiguity classes have very
few elements. For example, the top ten most frequent ambiguity classes in the complete dictionary
consist of one or two elements.
The ambiguity class of a word type can be predicted from its surface morphological features. For
example the suffix -s of walks indicates that an ambiguity class of {N, V } is likely for this word.
The four morphological features which we used for the ambiguity class model were: a binary feature
indicating whether the word is capitalized, a binary feature indicating whether the word contains a
hyphen, a binary feature indicating whether the word contains a digit character, and a nominal
feature indicating the suffix of a word. We define the suffix of a word to be the longest character
suffix (up to three letters) which occurs as a suffix of sufficiently many word types.2
We train the ambiguity class model on the set of word types present in the dictionary. We set the
multinomial parameters λj,u and ρ to maximize the joint likelihood of these word types and their morphological features. Maximum likelihood estimation for λ is complicated by the hidden variable ui, which selects a tag from the ambiguity class with uniform distribution:
P(s, m1, m2, m3, m4 | ρ, λ) = P(s|ρ) Σ_{u∈s} P(u|s) Π_{j=1}^{4} P(mj | λj,u).
We fix the probability P(u|s) = 1/|s| to the uniform distribution over tags in s. We estimate the ρ parameters using maximum likelihood estimation with add-1 (Laplace) smoothing and we train the λ parameters using EM (with add-1 smoothing in the M-step).
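A minimal EM loop for the λ parameters might look as follows. This is a simplified sketch under assumed data structures (each word type is a pair of its ambiguity class and its four feature values); the add-1 smoothing is applied in the probability read-out rather than stored in the counts.

```python
from collections import defaultdict
from math import prod

def em_morphology(word_types, n_iters=20):
    """EM for the lambda parameters: word_types is a list of (s, m) pairs,
    s a tuple of tags and m a tuple of four feature values."""
    counts = [defaultdict(lambda: defaultdict(float)) for _ in range(4)]
    for s, m in word_types:                 # flat initialization
        for u in s:
            for j in range(4):
                counts[j][u][m[j]] += 1.0 / len(s)

    def prob(j, u, mj):                     # add-1 smoothed P(m_j | u)
        c = counts[j][u]
        return (c.get(mj, 0.0) + 1.0) / (sum(c.values()) + len(c) + 1.0)

    for _ in range(n_iters):
        new = [defaultdict(lambda: defaultdict(float)) for _ in range(4)]
        for s, m in word_types:
            # E-step: posterior over the hidden tag u, with P(u|s) = 1/|s|
            scores = {u: prod(prob(j, u, m[j]) for j in range(4)) for u in s}
            z = sum(scores.values())
            for u, sc in scores.items():    # M-step expected counts
                for j in range(4):
                    new[j][u][m[j]] += sc / z
        counts = new
    return counts
```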
3.2 Parameter estimation for the word context model and prediction given complete dictionary
We restrict our attention at first to the setting where a complete tagging dictionary is given. The
incomplete dictionary generalization is discussed in Section 3.3. When every word is in the dictionary, the ambiguity class si for each word type wi is specified by the tagging dictionary, and the
ambiguity class model becomes irrelevant. The relevant parameters of the model in this setting are
α, β, ω, and θ. The contexts of word instances ck,i,j and the ambiguity classes si are observed. We integrate over all hidden variables except the uniform Dirichlet parameters α and β. We set α = 1 and we use Empirical Bayes to estimate β by maximizing the likelihood of the observed data given β and the ambiguity classes si. Note that if the ambiguity classes si and β are given, φi is
fixed. Below we use c to denote the vector of all contexts of all word instances, and s the vector of
ambiguity classes for all word types. We use ω to denote the vector of all multinomials ωk,l, θ to denote the vector of all θi, and t to denote the vector of all tag sequences ti for word types wi. (Footnote 2: A suffix occurs with sufficiently many word types if its type-frequency rank is below 100.) The
likelihood we would like to maximize is:
L(c|s, α, β) = ∫ P(ω|α) Π_{i=1}^{L} [ ∫ P(θi|φi) Π_{j=1}^{Wi} Σ_{l=1}^{T} θi,l Π_{k=1}^{4} P(ck,i,j | ωk,l) dθi ] dω

P(ω|α) = Π_{k=1}^{4} Π_{l=1}^{T} Dir(ωk,l | α)
Since exact inference is intractable, we use a variational approximation to the posterior distribution
of the hidden variables given the data and maximize instead of the exact log-likelihood, a lower
bound given by the variational approximation. This variational approximation is also used for finding the most likely assignment of the part-of-speech tag variables to instances of words.
More specifically, the variational approximation has analogous form to the approximation used for
the LDA model [11]. It depends on variational parameters ηk,l, γi, and πi,j:

Q(ω, θ, t | η, γ, π) = Π_{k=1}^{4} Π_{l=1}^{T} Dir(ωk,l | ηk,l) Π_{i=1}^{L} [ Dir(θi | γi) Π_{j=1}^{Wi} P(ti,j | πi,j) ]
This distribution is an approximation to the posterior distribution of the hidden variables:
P(ω, θ, t | c, s, α, β). As we can see, according to the Q distribution, the variables ω, θ, and t are independent. Each ωk,l is distributed according to a Dirichlet distribution with variational parameters ηk,l, each θi is also Dirichlet with parameters γi, and each tag ti,j is distributed according to a multinomial πi,j. We obtain the variational parameters by maximizing the following lower bound on the log-likelihood of the data (the dependence of Q on the variational parameters is not shown below for simplicity): E_Q[log P(ω, θ, t, c | s, α, β)] − E_Q[log Q(ω, θ, t)]
We use an iterative maximization algorithm for finding the values of the variational parameters. We
do not describe it here due to space limitations, but it is analogous to the one used in [11]. Given
fixed variational parameters ηk,l we maximize with respect to the variational parameters γi and πi,j corresponding to word types and their instances. Then, keeping the latter parameters fixed, we maximize with respect to ηk,l. We repeat until the change in the variational bound falls below a threshold. On our dataset, about 100 iterations of the outer loop for maximizing with respect to ηk,l were necessary. Given a variational distribution Q we can maximize the lower bound on the log-likelihood with respect to β. Since β is determined by a single real-valued parameter, we maximized with respect to β using a simple grid search.
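The shape of this coordinate-ascent procedure is easy to state even with the closed-form updates left abstract. In the sketch below the three callables are placeholders for the per-word-type updates (the γi and πi,j), the topic update (the ηk,l) and the evaluation of the variational bound; their bodies are not reproduced here and the names are invented.

```python
def fit_word_context_model(words, update_word, update_topics,
                           variational_bound, eta_init,
                           tol=1e-4, max_outer=100):
    """Outer loop of the variational optimization (sketch)."""
    eta = eta_init
    bound = float("-inf")
    for _ in range(max_outer):
        # per-word-type inner updates with the topic parameters fixed
        per_word = [update_word(w, eta) for w in words]
        eta = update_topics(per_word)          # re-estimate eta_{k,l}
        new_bound = variational_bound(words, per_word, eta)
        if new_bound - bound < tol:            # stop on small improvement
            break
        bound = new_bound
    return eta, per_word
```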
For predicting the tags ti,j of word tokens we use the same approximate posterior distribution Q.
Since according to Q all tags ti,j are independent given the variational parameters, Q(ti | πi) = Π_{j=1}^{Wi} P(ti,j | πi,j), finding the most likely assignment is straightforward.
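Concretely, once the πi,j have been fitted, per-token prediction is just an argmax over the tag dimension:

```python
import numpy as np

def predict_tags(pi_word):
    """pi_word: W_i x |s_i| array of variational tag multinomials for one
    word type; returns the index of the most likely tag for each token."""
    return np.asarray(pi_word).argmax(axis=1)
```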
3.3 Parameter estimation for the word context model and prediction with incomplete dictionary
So far we have described the training of the parameters of the word context model in the setting
where for all words, the ambiguity classes si are known and these variables are observed. When
the ambiguity classes si are unknown for some words in the dataset, they become additional hidden
variables, and the hidden variables in the word context model become dependent on the morphological features mi and the parameters of the ambiguity class model. Denote the vector of ambiguity
classes for the known (in the dictionary) word types by sd and the ambiguity classes for the unknown word types by su . The posterior distribution over the hidden variables of interest given the
observations becomes: P(ω, θ, t, su | sd, mu, c, α, β), where mu are the morphological features of
the unknown word types.
To perform inference in this setting we extend the variational approximation to account for the
additional hidden variables. Before we had, for every word type, a variational distribution over the
hidden variables corresponding to that word type:
Q(θi, ti | γi, πi) = Dir(θi | γi) Π_{j=1}^{Wi} P(ti,j | πi,j)
We now introduce a variational distribution including new hidden variables si for unknown words.
Q(θi, ti, si | mi, γi,s, πi,j,s) = P(si | mi) Dir(θi | γi,si) Π_{j=1}^{Wi} P(ti,j | πi,j,si)
That is, for each possible ambiguity class si of an unknown word wi we introduce variational parameters specific to that ambiguity class. Instead of single variational parameters γi and πi,j for a word with known si, we now have variational parameters {γi,s} and {πi,j,s} for all possible values s of si. For simplicity, we use the probability P(si|mi) = P(si|mi, ρ, λ) from the morphology-based ambiguity class model in the approximating distribution rather than introducing new variational parameters and learning this distribution.3 We adapt the algorithm to estimate the variational parameters. The derivation is slightly complicated by the fact that si and θi are not independent according to Q (this makes sense because si determines the dimensionality of θi), but the derived iterative
algorithm is essentially the same as for our basic model, if we imagine that an unknown word type
wi occurs with each of its possible ambiguity classes si a fractional p(si |mi ) number of times.
For predicting tag assignments for words according to this extended model, we use the same algorithm as described in Section 3.2, for word types whose ambiguity classes si are known. For words
with unknown ambiguity classes, we need to maximize over ambiguity classes as well as tag assignments. We use the following algorithm to obtain a slightly better approximation than the one given
by the variational distribution Q. For each possible tag set si , we find the most likely assignment
of tags given that ambiguity class, t*(si), using the variational distribution as in the case of known
ambiguity classes. We then choose an ambiguity class and an assignment of tags according to:
s* = arg max_si P(si | mi, ρ, λ) P(t*(si), ci | si, D, α, β) and t = t*(s*).
We compute P(t*(si), ci | si, D, α, β) by integrating with respect to the word context distributions ω, whose approximate posterior given the data is Dirichlet with parameters ηk,l, and by integrating with respect to θi, which are Dirichlet with parameters β and dimensionality given by si.
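The class-then-tags decision rule can be written directly once the per-class quantities are available. In this sketch the three dictionaries are hypothetical containers for P(si|mi), the per-class best assignments t*(s), and the log of P(t*(s), ci | s, D, α, β):

```python
import math

def predict_unknown_word(classes, p_class, best_tags, log_lik):
    """Pick the ambiguity class s* maximizing P(s|m) * P(t*(s), c | s, ...)
    and return it together with the corresponding tag assignment t*(s*)."""
    scores = {s: math.log(p_class[s]) + log_lik[s] for s in classes}
    s_star = max(scores, key=scores.get)
    return s_star, best_tags[s_star]
```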
4 Experimental Evaluation
We evaluate the performance of our model in comparison with other related models. We train and
evaluate the model in three different settings. In the first setting, a complete tagging dictionary is
available, and in the other two settings the coverage of the dictionary is greatly reduced.
The tagging dictionary was constructed by collecting for each word type, the set of parts-of-speech
with which it occurs in the annotated WSJ Penn Treebank, including the test set. This method of
constructing a tag dictionary is arguably unrealistic but has been used in previous research [7, 9, 6]
and provides a reproducible framework for comparing different models. In the complete dictionary
setting, we use the ambiguity class information for all words, and in the second and third setting we
remove from the dictionary all word types that have occurred with frequency less than 2 and less
than 3, respectively, in the test set of 1,005 sentences. The complete tagging dictionary contains
entries for 49, 206 words. The dictionary obtained with cutoff of 2 contains 2,141 words, and the
one with cutoff of 3 contains 1,249 words. We train the model on the whole (unlabeled) WSJ Penn
Treebank, consisting of 49,208 sentences. We evaluate performance on a set of 1,005 sentences,
which is a subset of the training data and is the same test set used by [7, 9].
To see how much removing information from the dictionary impacts the hardness of the problem we
can look at the accuracy of a classifier choosing a tag at random from the possible tags of words,
shown in the column Random of Table 1. Results for the three settings are shown in the three rows
of Table 1. In addition to the Random baseline, we include the results of a frequency baseline, Freq,
in which for each word, we choose the most frequent tag from its set of possible tags.4 This baseline
uses the same amount of partial supervision as our models. If labeled corpus data were available, a
model which assigns the most frequent tag to each word by using p?(t|w) would do much better.
The models in the table are:
LDA is the model proposed in this paper, excluding the ambiguity class model. The ambiguity class
model is irrelevant when a complete dictionary is available because all si are observed. In the other
two settings for the LDA model we assume that si is the complete ambiguity class (all 17 tags)
(3) We also limit the number of possible ambiguity classes per word to the three most likely ones and renormalize the probability mass among them.
(4) Frequency of tags is the unigram frequency of tags p̂(t) by token in the unlabeled data. Since the tokens in the corpus are not actually labeled, we compute the frequency by giving fractional counts to each possible tag of words in the dictionary. Only the words present in the dictionary were used for computing p̂(t).
Dictionary coverage   LDA    LDA+AC   PLSA   PLSA+AC   CE+spelling (S&E)   Bayesian HMM (G&G)   ML HMM (G&G)   Random   Freq
complete              93.4   93.4     89.7   89.7      88.7 (91.9)         87.3                 83.2           69.5     64.8
count ≥ 2             87.4   91.2     83.4   87.8      79.5 (90.3)         79.6                 70.6           56.6     64.8
count ≥ 3             85.0   89.7     80.2   85.9      78.4 (89.5)         71.0                 65.5           51.0     62.9
Table 1: Results from minimally supervised POS-tagging models.
for words which are not in the dictionary and do not attempt to predict a more specific ambiguity
class. The estimated parameter β for the tag prior was 0.5 for the complete dictionary setting, and
0.2 for the other two settings, encouraging sparse distributions. For this model we estimate the
variational parameters ηk,l and the Dirichlet parameter β to maximize the variational bound on the
log-likelihood of the word types which are in the dictionary only. We found that including unknown
word types was detrimental to performance.
LDA+AC is our full model including the model of ambiguity classes of words given their morphological features. As mentioned above, this augmented model differs from LDA only when the
dictionary is incomplete. We trained this model on all word types as discussed in Section 3.3. The
estimated β parameters for this model in the three dictionary settings were 0.5, 0.1, and 0.1, respectively.
PLSA is the model analogous to LDA, which has the same structure as our word context model, but
excludes the Bayesian components. We include this model in the comparison in order to evaluate
the effect on performance of the sparse prior and the integration over model parameters. This model
is similar to the PLSA model for text documents [15]. The PLSA model does not have a prior on
the word-specific distributions over tags θi = p(t|wi) and it does not have a prior distribution on the topic-specific multinomials for context words ωk,l. For this model we find maximum likelihood estimates for these parameters by applying an EM algorithm. We do add-1 smoothing for ωk,l in the
M step, because even though this is not theoretically justified for this mixture model, it is frequently
used in practice and helps prevent probabilities of zero for possible events. PLSA does not include
the ambiguity class model for si and as in the LDA model, word types not in the dictionary were
assumed to have ambiguity classes containing all 17 tags. PLSA+AC extends the PLSA model by
the inclusion of the ambiguity class model.
CE+spelling (S&E) is the sequence model for semi-supervised part-of-speech tagging proposed in
[7], based on an HMM-structured model estimated using contrastive estimation. This is the stateof-the-art model for semi-supervised tagging using an incomplete dictionary. In the table we show
actual performance and oracle performance for this model (oracle performance is in brackets).The
oracle is obtained by testing models with different values of a smoothing hyper-parameter on the
test set and choosing the model with the best accuracy. Even though there is only one real-valued
hyper-parameter, the accuracies of models using different values can vary by nearly ten accuracy
points and it is thus more fair to compare our results to the non-oracle result, until a better criterion
for setting the hyper-parameters using only the partial supervision is found. The results shown in
the table are for a model which incorporates morphological features.
Bayesian HMM (G&G) is a fully Bayesian HMM model for semi-supervised part-of-speech tagging proposed in [9], which incorporates sparse Dirichlet priors on p(w|t) of word tokens given part
of speech tags and p(ti | ti−1, ti−2) of transition probabilities in the HMM. We include this model
in the comparison, because it uses sparse priors and Bayesian inference as our LDA model, but
using a different structure of the model. [9] showed that this model outperforms significantly a
non-Bayesian HMM model, whose results we show as well.
ML HMM (G&G) is the maximum likelihood version of a trigram HMM for semi-supervised part-of-speech tagging. Results for this model have been reported by other researchers as well [7, 6]. We
use the performance numbers reported in [9] because they have used the same data sets for testing.
The last two models do not use spelling (morphological) features. We should note that even though
the same amount of supervision in the form of a tagging dictionary is used by all compared models,
the HMM and CE models whose results are shown in the Table have been trained on less unsupervised natural language text: they have been trained using only the test set of 1,005 sentences.
However, there is no reason one should limit the amount of unlabeled data used and in addition,
other results reported in [7] and [9] show that accuracy does not seem to improve when more unlabeled data are used with these models.
There are several points to note about the experimental results. First, the fact that PLSA substantially
outperforms ML HMM (and even the Bayesian HMM) models shows that predicting the tags of
words from a window of neighboring word tokens and modeling the P (t|w) distribution directly
results in an advantage over HMMs with maximum likelihood or Bayesian estimation. This is
consistent with the success of other models that used word context for part-of-speech prediction in
different ways [4, 8]. Second, the Bayesian and sparse-prior components of our model do indeed
contribute substantially to performance, as illustrated by the performance of LDA compared to that
of PLSA. LDA achieves an error reduction of up to 36% over PLSA. Third, our ambiguity class
model results in a significant improvement as well; LDA+AC reduces the error of LDA by up to
31%. PLSA+AC similarly reduces the error of PLSA. Finally, our complete model outperforms the
state-of-the-art model CE+spelling. It reduces the error of the non-oracle models by up to 57% and
also outperforms the oracle models.
We compared the performance of our model to that of state-of-the-art models applied in the same
setting. It will also be interesting to compare our model to the one proposed in [8], which was
applied in a different partial supervision setting. In their setting a small set of example word types
(which they call prototypes) are provided for each possible tag (only three prototypes per tag were
specified). Their model achieved an accuracy of 82.2% on a similar dataset. We can not directly
compare the performance of our model to theirs, because our model would need prototypes for
every ambiguity class rather than for every tag. In future work we will explore whether a very small
set of prototypical ambiguity classes and corresponding word types can achieve the performance
we obtained with an incomplete tagging dictionary. Another interesting direction for future work
is applying our model to other NLP disambiguation tasks, such as named entity recognition and
induction of deeper syntactic or semantic structure, which could benefit from both our ambiguity
class model and our word context model.
References
[1] Kristina Toutanova, Dan Klein, and Christopher D. Manning. Feature-rich part-of-speech tagging with a
cyclic dependency network. In Proceedings of HLT-NAACL 03, 2003.
[2] Michael Collins. Discriminative training methods for hidden markov models: Theory and experiments
with perceptron algorithms. In EMNLP, 2002.
[3] John Blitzer, Ryan McDonald, and Fernando Pereira. Domain adaptation with structural correspondence
learning. In EMNLP, 2006.
[4] Hinrich Schütze. Distributional part-of-speech tagging. In EACL, 1995.
[5] Bernard Merialdo. Tagging english text with a probabilistic model. In ICASSP, 1991.
[6] Michele Banko and Robert C. Moore. Part of Speech tagging in context. In COLING, 2004.
[7] Noah A. Smith and Jason Eisner. Contrastive estimation: Training log-linear models on unlabeled data.
In ACL, 2005.
[8] Aria Haghighi and Dan Klein. Prototype-driven learning for sequence models. In HLT-NAACL, 2006.
[9] Sharon Goldwater and Thomas L. Griffiths. A fully Bayesian approach to unsupervised Part-of-Speech
tagging. In ACL, 2007.
[10] Mark Johnson. Why doesn?t EM find good HMM POS-taggers. In EMNLP, 2007.
[11] David Blei, Andrew Ng, and Michael Jordan. Latent dirichlet allocation. Journal of Machine Learning
Research, 3:993?1022, 2003.
[12] Dan Klein and Christopher D. Manning. Natural language grammar induction using a constituent-context
model. In NIPS 14, 2002.
[13] Jenny Rose Finkel, Trond Grenager, and Christopher Manning. Incorporating non-local information into
information extraction systems by Gibbs sampling. In ACL, 2005.
[14] Tetsuji Nakagawa and Yuji Matsumoto. Guessing parts-of-speech of unknown words using global information. In ACL, 2006.
[15] Thomas Hofmann. Probabilistic latent semantic analysis. In UAI, 1999.
Reinforcement Learning in Continuous Action Spaces
through Sequential Monte Carlo Methods
Alessandro Lazaric Marcello Restelli Andrea Bonarini
Department of Electronics and Information
Politecnico di Milano
piazza Leonardo da Vinci 32, I-20133 Milan, Italy
{bonarini,lazaric,restelli}@elet.polimi.it
Abstract
Learning in real-world domains often requires dealing with continuous state and
action spaces. Although many solutions have been proposed to apply Reinforcement Learning algorithms to continuous state problems, the same techniques can
hardly be extended to continuous action spaces, where, besides the computation of
a good approximation of the value function, a fast method for the identification of
the highest-valued action is needed. In this paper, we propose a novel actor-critic
approach in which the policy of the actor is estimated through sequential Monte
Carlo methods. The importance sampling step is performed on the basis of the
values learned by the critic, while the resampling step modifies the actor's policy.
The proposed approach has been empirically compared to other learning algorithms in several domains; in this paper, we report results obtained in a control
problem consisting of steering a boat across a river.
1 Introduction
Most of the research on Reinforcement Learning (RL) [13] has studied solutions to finite Markov
Decision Processes (MDPs). On the other hand, learning in real-world environments requires dealing with continuous state and action spaces. While several studies have focused on problems with continuous states, little attention has been devoted to tasks involving continuous actions. Although
several tasks may be (suboptimally) solved by coarsely discretizing the action variables (for instance using the tile coding approach [11, 12]), a different approach is required for problems in
which high-precision control is needed and actions slightly different from the optimal one lead to
very low utility values. In fact, since RL algorithms need to experience each available action several
times to estimate its utility, using very fine discretizations may be too expensive for the learning
process. Some approaches, although using a finite set of target actions, deal with this problem by
selecting real-valued actions obtained by interpolation of the available discrete actions on the basis
of their utility values [9, 14]. Despite of this capability, the learning performance of these algorithms
relies on strong assumptions about the shape of the value function that are not always satisfied in
highly non-linear control problems. The wire fitting algorithm [2] (later adopted also in [4]) tries to
solve this problem by implementing an adaptive interpolation scheme in which a finite set of
⟨action, value⟩ pairs is modified in order to better approximate the action-value function.
Besides having the capability of selecting any real-valued action, RL algorithms for continuous action problems should be able to efficiently find the greedy action, i.e., the action associated with the
highest estimated value. Unlike in the finite MDP case, a full search in a continuous action
space to find the optimal action is often unfeasible. To overcome this problem, several approaches
limit their search over a finite number of points. In order to keep this number low, many algorithms
(e.g., tile coding and interpolation-based) need to make (often implicit) assumptions about the shape
of the value function. To overcome these difficulties, several approaches have adopted the actor-
critic architecture [7, 10]. The key idea of actor-critic methods is to explicitly represent the policy
(stored by the actor) with a memory structure independent of the one used for the value function
(stored by the critic). In a given state, the policy followed by the agent is a probability distribution
over the action space, usually represented by parametric functions (e.g., Gaussians [6], neural networks [14], fuzzy systems [5]). The role of the critic is, on the basis of the estimated value function,
to criticize the actions taken by the actor, which consequently modifies its policy through a stochastic gradient on its parameter space. In this way, starting from a fully exploratory policy, the actor
progressively changes its policy so that actions that yield higher utility values are more frequently
selected, until the learning process converges to the optimal policy. By explicitly representing the
policy, actor-critic approaches can efficiently implement the action selection step even in problems
with continuous action spaces.
In this paper, we propose to use a Sequential Monte Carlo (SMC) method [8] to approximate the
sequence of probability distributions implemented by the actor, thus obtaining a novel actor-critic
algorithm called SMC-learning. Instead of a parametric function, the actor represents its stochastic
policy by means of a finite set of random samples (i.e., actions) that, using simple resampling and
moving mechanisms, is evolved over time according to the values stored by the critic. Actions are
initially drawn from a prior distribution, and then they are resampled according to an importance
sampling estimate which depends on the utility values learned by the critic. By means of the resampling and moving steps, the set of available actions becomes increasingly concentrated around actions with
larger utilities, thus encouraging a detailed exploration of the most promising action-space regions,
and allowing SMC-learning to find real continuous actions. It is worth pointing out that the main
goal here is not an accurate approximation of the action-value function on the whole action space,
but to provide an efficient way to converge to the continuous optimal policy. The main characteristics of the proposed approach are: the agent may learn to execute any continuous action, the action
selection phase and the search for the action with the best estimated value are computationally efficient, no assumption on the shape of the value function is required, the algorithm is model-free, and
it may also learn to follow stochastic policies (needed in multi-agent problems).
In the next section, we introduce basic RL notation and briefly discuss issues about learning with
continuous actions. Section 3 details the proposed learning approach (SMC-Learning), explaining
how SMC methods can be used to learn in continuous action spaces. Experimental results are
discussed in Section 4, and Section 5 draws conclusions and contains directions for future research.
2 Reinforcement Learning
In reinforcement learning problems, an agent interacts with an unknown environment. At each
time step, the agent observes the state, takes an action, and receives a reward. The goal of the
agent is to learn a policy (i.e., a mapping from states to actions) that maximizes the long-term
return. An RL problem can be modeled as a Markov Decision Process (MDP) defined by a tuple ⟨S, A, T, R, γ⟩, where S is the set of states, A(s) is the set of actions available in state s, T : S × A × S → [0, 1] is a transition distribution that specifies the probability of observing a certain state after taking a given action in a given state, R : S × A → ℝ is a reward function that specifies the instantaneous reward when taking a given action in a given state, and γ ∈ [0, 1) is a discount factor. The policy of an agent is characterized by a probability distribution π(a|s) that specifies the probability of taking action a in state s. The utility of taking action a in state s and following a policy π thereafter is formalized by the action-value function

Q^π(s, a) = E[ Σ_{t=1}^∞ γ^{t−1} r_t | s_1 = s, a_1 = a, π ],

where r_1 = R(s, a). RL approaches aim at learning the policy that maximizes the action-value function in each state. The optimal action-value function can be computed by solving the Bellman equation: Q*(s, a) = R(s, a) + γ Σ_{s′} T(s, a, s′) max_{a′} Q*(s′, a′). The optimal policy takes the greedy action in each state: π*(a|s) is equal to 1/|arg max_a Q*(s, a)| if a ∈ arg max_a Q*(s, a), and 0 otherwise.
Temporal Difference (TD) algorithms [13] allow the computation of Q^π(s, a) by direct interaction with the environment. Given the tuple ⟨s_t, a_t, r_t, s_{t+1}, a_{t+1}⟩ (i.e., the experience gathered by the agent), at each step, action values may be estimated by online algorithms, such as SARSA, whose update rule is:

Q(s_t, a_t) ← (1 − α) Q(s_t, a_t) + α u(r_t, a_{t+1}, s_{t+1}),    (1)

where α ∈ [0, 1] is a learning rate and u(r_t, a_{t+1}, s_{t+1}) = r_t + γ Q(s_{t+1}, a_{t+1}) is the target utility.
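As a concrete illustration, the SARSA update of Eq. (1) is a one-line interpolation between the old estimate and the target utility. The sketch below is a minimal version; the dictionary-based Q-table and the default parameter values are our own illustrative assumptions, not part of the paper.

```python
# Minimal sketch of the SARSA update of Eq. (1); the dictionary-based
# Q-table and the default parameter values are illustrative assumptions.
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.95):
    """One on-policy TD step: Q(s,a) <- (1 - alpha) Q(s,a) + alpha * u."""
    u = r + gamma * Q.get((s_next, a_next), 0.0)  # target utility u(r_t, a_{t+1}, s_{t+1})
    Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * u
```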
Although value-function approaches have theoretical guarantees about convergence to the optimal
policy and have been proved to be effective in many applications, they have several limitations:
algorithms that maximize the value function cannot solve problems whose solutions are stochastic policies (e.g., multi-agent learning problems); small errors in the estimated value of an action
may lead to discontinuous changes in the policy [3], thus leading to convergence problems when
function approximators are considered. These problems may be overcome by adopting actor-critic
methods [7] in which the action-value function and the policy are stored into two distinct representations. The actor typically represents the distribution density over the action space through a
function ?(a|s, ?), whose parameters ? are updated in the direction of performance improvement,
as established by the critic on the basis of its approximation of the value function, which is usually
computed through an on-policy TD algorithm.
3 SMC-Learning for Continuous Action Spaces
SMC-learning is based on an actor-critic architecture, in which the actor stores and updates, for
each state s, a probability density π_t(a|s) that specifies the agent's policy at time instant t. At
the beginning of the learning process, without any prior information about the problem, the actor
usually considers a uniform distribution over the action space, thus implementing a fully exploratory
policy. As the learning process progresses, the critic collects data for the estimation of the value
function (in this paper, the critic estimates the action-value function), and provides the actor with
information about which actions are the most promising. On the other hand, the actor changes its
policy to improve its performance and to progressively reduce the exploration in order to converge
to the optimal deterministic policy. Instead of using parametric functions, in SMC-learning the
actor represents its evolving stochastic policy by means of Monte Carlo sampling. The idea is the
following: for each state s, the set of available actions A(s) is initialized with N samples drawn
from a proposal distribution π_0(a|s):

A(s) = {a_1, a_2, …, a_N},    a_i ∼ π_0(a|s).

Each sampled action a_i is associated with an importance weight w_i ∈ W(s) whose value is initialized to 1/N, so that the prior density can be approximated as

π_0(a|s) ≈ Σ_{i=1}^N w_i · δ(a − a_i),
where a_i ∈ A(s), w_i ∈ W(s), and δ is the Dirac delta measure. As the number of samples goes
to infinity, this representation becomes equivalent to the functional description of the original probability
density function. This means that the actor can approximately follow the policy specified by the
density π_0(a|s) by simply choosing actions at random from A(s), where the (normalized) weights
are the selection probabilities. Given the continuous action-value function estimated by the critic
and a suitable exploration strategy (e.g., the Boltzmann exploration), it is possible to define
the desired probability distribution over the continuous action space, usually referred to as the target distribution. As the learning process goes on, the action values estimated by the critic
become more and more reliable, and the policy followed by the agent should change in order to
choose actions with higher utilities more frequently. This means that, in each state, the target distribution changes according to the information collected during the learning process, and the actor
must consequently adapt its approximation.
In general, when no information is available about the shape of the target distribution, SMC methods can be effectively employed to approximate sequences of probability distributions by means of
random samples, which are evolved over time exploiting importance sampling and resampling techniques. The idea behind importance sampling is to modify the weights of the samples to account
for the differences between the target distribution p(x) and the proposal distribution q(x) used to
generate the samples. By setting each weight w_i proportional to the ratio p(x_i)/q(x_i), the discrete
weighted distribution Σ_{i=1}^N w_i · δ(x − x_i) better approximates the target distribution. In our context,
the importance sampling step is performed by the actor, which modifies the weights of the actions
according to their utility values estimated by the critic. When some samples have very small or very
large normalized weights, it follows that the target density significantly differs from the proposal
density used to draw the samples. From a learning perspective, this means that the set of available
Algorithm 1 SMC-learning algorithm
for all s ∈ S do
  Initialize A(s) by drawing N samples from π_0(a|s)
  Initialize W(s) with uniform values: w_i = 1/N
end for
for each time step t do
  Action Selection:
    Given the current state s_t, the actor selects action a_t from A(s_t) according to π_t(a|s) = Σ_{i=1}^N w_i · δ(a − a_i)
  Critic Update:
    Given the reward r_t and the utility of next state s_{t+1}, the critic updates the action value Q(s_t, a_t)
  Actor Update:
    Given the action-value function, the actor updates the importance weights
    if the weights have a high variance then
      the set A(s_t) is resampled
    end if
end for
actions contains a number of samples whose estimated utility is very low. To avoid this, the actor has to modify the set of available actions by resampling new actions from the current weighted
approximation of the target distribution.
In SMC-learning, SMC methods are embedded in a learning algorithm that iterates through three
main steps (see Algorithm 1): the action selection performed by the actor, the update of the action-value function managed by the critic, and finally the update of the policy of the actor.
3.1 Action Selection
One of the main issues of learning in continuous action spaces is to determine which is the best action
in the current state, given the (approximated) action-value function. Actor-critic methods effectively
solve this problem by explicitly storing the current policy. As previously described, in SMC-learning
the actor performs the action selection step by taking one action at random among those available
in the current state. The probability of selecting each action is equal to its normalized weight,
Pr(a_i|s) = w_i. The time complexity of the action selection phase of SMC-learning is logarithmic
in the number of action samples.
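A minimal sketch of this selection step follows; maintaining the cumulative weights incrementally between updates and drawing via binary search is our implementation choice, and it is what yields the logarithmic per-selection cost.

```python
import bisect
import random

def select_action(actions, cum_weights):
    """Draw one action with probability proportional to its weight.
    `cum_weights` are running sums of the (unnormalized) weights, assumed
    to be maintained incrementally; the draw itself is O(log N)."""
    u = random.random() * cum_weights[-1]
    return actions[bisect.bisect_left(cum_weights, u)]

# Example: actions [0.1, 0.5, 0.9] with weights [0.2, 0.5, 0.3]
# give cum_weights [0.2, 0.7, 1.0].
```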
3.2 Critic Update
While the actor determines the policy, the critic, on the basis of the collected rewards, computes
an approximation of the action-value function. Although several function approximation schemes
could be adopted for this task (e.g., neural networks, regression trees, support-vector machines), we
use a simple solution: the critic stores an action value, Q(s, ai ), for each action available in state s
(like in tabular approaches) and modifies it according to TD update rules (see Equation 1). Using
on-policy algorithms, such as SARSA, the time complexity of the critic update is constant (i.e., does
not depend on the number of available actions).
3.3 Actor Update
The core of SMC-learning is represented by the update of the policy distribution performed by the
actor. Using the importance sampling principle, the actor modifies the weights wi , thus performing
a policy improvement step based on the action values computed by the critic. In this way, actions
with higher estimates get more weight. Several RL schemes could be adopted to update the weights.
In this paper, we focus on the Boltzmann exploration strategy [13].
The Boltzmann exploration strategy favors the execution of actions with higher estimated utility
values. The probabilities computed by the Boltzmann exploration can be used as weights for the
available actions. At time instant t, the weight of action a_i in state s is updated as follows:

w_i^{t+1} = w_i^t e^{ΔQ^{t+1}(s, a_i)/τ} / Σ_{j=1}^N w_j^t e^{ΔQ^{t+1}(s, a_j)/τ},    (2)

where ΔQ^{t+1}(s, a_i) = Q^{t+1}(s, a_i) − Q^t(s, a_i), and the parameter τ (usually referred to as the temperature) specifies the degree of exploration: the higher τ, the higher the exploration.
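A sketch of this importance-sampling step is given below; the list-based representation of the weights and value changes is an assumption of ours.

```python
import math

def boltzmann_reweight(weights, delta_q, tau):
    """Eq. (2): multiply each weight by the Boltzmann factor of the change
    in the estimated value of its action, then renormalize.
    `weights` and `delta_q` are lists indexed by action sample."""
    unnorm = [w * math.exp(dq / tau) for w, dq in zip(weights, delta_q)]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```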
Once the weights have been modified, the agent's policy has changed. Unfortunately, it is not
possible to optimally solve continuous action MDPs by exploring only a finite set of actions sampled
from a prior distribution, since the optimal action may not be available. Since the prior distribution
used to initialize the set of available actions significantly differs from the optimal policy distribution,
after a few iterations, several actions will have negligible weights: this problem is known as the
weight degeneracy phenomenon [1]. Since the number of samples should be kept low for efficiency
reasons, having actions associated with very small weights means wasting learning parameters on
approximating both the policy and the value function in regions of the action space that are not
relevant with respect to the optimal policy. Furthermore, long learning time is spent executing, and
updating the utility values of, actions that are unlikely to be optimal. Therefore, following the SMC
approach, after the importance sampling phase, a resampling step may be needed in order to improve
the distribution of the samples on the action domain. The degeneracy phenomenon can be measured
through the effective sample size [8], which, for each state s, can be estimated by

N̂_eff(s) = 1 / Σ_{w_i ∈ W(s)} w_i²,    (3)

where w_i is the normalized weight. N̂_eff(s) is always less than the number of actions contained in A(s), and low values of N̂_eff(s) reveal high degeneracy. In order to avoid high degeneracy, the actions are resampled whenever the ratio between the effective sample size N̂_eff(s) and the number of samples N falls below some given threshold ρ. The goal of resampling methods is to
replace samples that have small weights with new samples close to samples that have large weights, so that
the discrepancy between the resampled weights is reduced. The new set of samples is generated by
resampling (with replacement) N times from the discrete distribution

φ(a|s) = Σ_{i=1}^N w_i · δ(a − a_i),    (4)

so that samples with high weights are selected many times. Among the several resampling approaches that have been proposed, here we consider the systematic resampling scheme, since it can
be easily implemented, takes O(N) time, and minimizes the Monte Carlo variance (refer to [1] for
more details). The new samples inherit the action values of their parents, and the sample
weights are re-initialized using the Boltzmann distribution.
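The two ingredients of this step, the effective-sample-size test of Eq. (3) and systematic resampling, can be sketched as follows (a minimal version, assuming normalized weights):

```python
import random

def effective_sample_size(weights):
    """Eq. (3): N_eff = 1 / sum_i w_i^2 for normalized weights w_i."""
    return 1.0 / sum(w * w for w in weights)

def systematic_resample(actions, weights):
    """Systematic resampling: a single uniform draw defines N evenly spaced
    pointers into the cumulative weights; runs in O(N) time."""
    n = len(actions)
    positions = [(random.random() + i) / n for i in range(n)]
    resampled, cum, i = [], weights[0], 0
    for p in positions:
        while p > cum:
            i += 1
            cum += weights[i]
        resampled.append(actions[i])
    return resampled

# Resample whenever effective_sample_size(w) / len(w) < rho.
```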
Although the resampling step reduces the degeneracy, it introduces another problem known as sample impoverishment. Since samples with large weights are replicated several times, after a few
resampling steps a significant number of samples could be identical. Furthermore, we need to learn
over a continuous space, and this cannot be carried out using a discrete set of fixed samples; in
fact, the learning agent would not be able to achieve the optimal policy whenever the initial set of
available actions in state s (A(s)) does not contain the optimal action of that state. This limitation
may be overcome by means of a smoothing step, which consists of moving the samples according to a
continuous approximation π̃(a|s, w_i) of the posterior distribution. The approximation is obtained
by using a weighted mean of kernel densities:
π̃(a|s, w_i) = (1/h) Σ_{i=1}^N w_i K((a − a_i)/h),    (5)
where h > 0 is the kernel bandwidth. Typical choices for the kernel densities are Gaussian kernels
and Epanechnikov kernels. However, these kernels produce over-dispersed posterior distributions,
and this negatively affects the convergence speed of the learning process, especially when a few
samples are used. Here, we propose to use uniform kernels:
K_i(a) = U[ (a_{i−1} − a_i)/2 ; (a_{i+1} − a_i)/2 ].    (6)

As far as boundary samples are concerned (i.e., a_1 and a_N), their kernels are set to K_1(a) = U[(a_1 − a_2) ; (a_2 − a_1)/2] and K_N(a) = U[(a_{N−1} − a_N)/2 ; (a_N − a_{N−1})] respectively, thus preserving the possibility of covering the whole action domain. Using these (non-overlapping) kernel densities, each sample is moved locally within an interval which is determined
by its distances from the adjacent samples, thus achieving fast convergence.
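A sketch of the resulting moving step is shown below (assuming the samples are sorted and N ≥ 2); each resampled action is drawn uniformly from the interval defined by Eq. (6).

```python
import random

def move_samples(actions):
    """Smoothing step with the non-overlapping uniform kernels of Eq. (6):
    interior samples move within half the gap to each neighbour; boundary
    samples use the wider one-sided intervals. Assumes sorted input, N >= 2."""
    a = sorted(actions)
    n = len(a)
    moved = []
    for i in range(n):
        lo = a[i] + (a[i - 1] - a[i]) / 2 if i > 0 else a[0] + (a[0] - a[1])
        hi = a[i] + (a[i + 1] - a[i]) / 2 if i < n - 1 else a[-1] + (a[-1] - a[-2])
        moved.append(random.uniform(lo, hi))
    return moved
```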
Figure 1: The boat problem. [Diagram: the 200×200 river, with the quay (success zone) on the right bank, the surrounding viability zone, and the cross current.]

Table 1: The dynamics parameters.
    Parameter   Value
    f_c         1.25
    I           0.1
    s_MAX       2.5
    s_D         1.75
    p           0.9
    quay        (200, 110)
    Z_s width   0.2
    Z_v width   20

Table 2: The learning parameters.
    Alg.       Param.     Value
    All        α_0/δ_α    0.5/0.01
    All        γ          0.99
    SARSA      τ_0/δ_τ    3.0/0.0001
    SMC        ρ          0.95
    SMC        τ_0/δ_τ    25.0/0.0005
    Cont.-QL   α/δ_α      0.4/0.005
Besides reducing the dispersion of the samples, this resampling scheme implements, from the critic's
perspective, a variable-resolution generalization approach. Since the resampled actions inherit the
action value associated with their parent, the learned values are generalized over a region whose width
depends on the distance between samples. As a result, at the beginning of the learning process, when
the actions are approximately uniformly distributed, SMC-learning performs broad generalization,
thus boosting the performance. On the other hand, when learning is near convergence, the available actions tend to group around the optimal action, thus automatically reducing the generalization
that may otherwise prevent the learning of the optimal policy (see [12]).
4 Experiments
In this section, we show experimental results with the aim of analyzing the properties of SMC-learning and of comparing its performance with other RL approaches. Additional experiments on a
mini-golf task and on the swing-up pendulum problem are reported in the Appendix.
4.1 The Boat Problem
To illustrate how the SMC-learning algorithm works and to assess its effectiveness with respect to
approaches based on discretization, we used a variant of the boat problem introduced in [5]. The
problem is to learn a controller that drives a boat from the left bank to the quay on the right bank of a river,
with a strong non-linear current (see Figure 1). The boat's bow coordinates, x and y, are defined in the range [0, 200] and the controller sets the desired direction U over the range [−90°, 90°]. The dynamics of the boat's bow coordinates is described by the following equations:

x_{t+1} = min(200, max(0, x_t + s_{t+1} cos(θ_{t+1})))
y_{t+1} = min(200, max(0, y_t − s_{t+1} sin(θ_{t+1}) − E(x_{t+1})))

where the effect of the current is defined by E(x) = f_c (x/50 − (x/100)²), where f_c is the force of the current, and the boat angle θ_t and speed s_t are updated according to the desired direction U_{t+1} as:

ω_{t+1} = ω_t + I δ_{t+1}
θ_{t+1} = θ_t + (ω_{t+1} − ω_t)(s_{t+1}/s_MAX)
s_{t+1} = s_t + (s_D − s_t) I
δ_{t+1} = min(max(p(U_{t+1} − θ_t), −45°), 45°)

where I is the system inertia, s_MAX is the maximum speed allowed for the boat, s_D is the speed goal, δ is the rudder angle, ω is the rudder-driven turn variable, and p is a proportional coefficient used to compute the rudder angle in order to reach the desired direction U_t.
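For concreteness, the dynamics above can be transcribed directly into code. The sketch below uses the Table 1 values as defaults; the name `omega` for the intermediate turn variable is our own choice of symbol.

```python
import math

def current(x, fc=1.25):
    """Cross-current strength E(x) = fc * (x/50 - (x/100)^2)."""
    return fc * (x / 50.0 - (x / 100.0) ** 2)

def boat_step(x, y, theta, omega, s, U, I=0.1, s_max=2.5, s_d=1.75, p=0.9):
    """One step of the boat dynamics; default parameter values follow Table 1."""
    delta = min(max(p * (U - theta), -45.0), 45.0)  # rudder angle, clipped
    omega_new = omega + I * delta
    s_new = s + (s_d - s) * I
    theta_new = theta + (omega_new - omega) * (s_new / s_max)
    rad = math.radians(theta_new)
    x_new = min(200.0, max(0.0, x + s_new * math.cos(rad)))
    y_new = min(200.0, max(0.0, y - s_new * math.sin(rad) - current(x_new)))
    return x_new, y_new, theta_new, omega_new, s_new
```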
The reward function is defined on three bank zones. The success zone Zs corresponds to the quay,
the viability zone Z_v is defined around the quay, and the failure zone Z_f covers all the other bank points.
Therefore, the reward function is defined as:

R(x, y) = +10 if (x, y) ∈ Z_s;  D(x, y) if (x, y) ∈ Z_v;  −10 if (x, y) ∈ Z_f;  0 otherwise.    (7)
Figure 2: Performance comparison between SMC-learning and SARSA (left), and between SMC-learning, tile coding, and Continuous Q-learning (right). [Plots of total reward per episode versus learning episodes (×1000); left: SARSA with 5, 10, 20, and 40 actions versus SMC-learning with 5 and 10 samples; right: SMC-learning with 10 samples versus QL-Continuous with 40 actions and tile coding with 80 actions.]
where D is a function that gives a reward decreasing linearly from 10 to −10 with the distance from the success zone. In the experiment, each state variable is discretized into 10 intervals and the parameters of the dynamics are those listed in Table 1. At each trial, the boat is positioned at random along the left bank at one of the points shown in Figure 1. In the following, we compare the results obtained with four different algorithms: SARSA with Boltzmann exploration with different discretizations of the action space, SARSA with tile coding (or CMAC) [12], Continuous Q-learning [9], and SMC-learning. The learning parameters of each algorithm are listed in Table 2.¹

¹ δ_x denotes the decay rate of parameter x, whose value after N trials is computed as x(N) = x(0)/(1 + δ_x N).
Figure 2-left compares the learning performance (in terms of total reward per episode) for SARSA
with 5, 10, 20, and 40 evenly distributed actions to the results obtained by SMC-learning with
5 and 10 samples. As it can be noticed, the more the number of actions available the better the
performance of SARSA is. With only 5 actions (one action every 36°), the paths that the controller
can follow are quite limited and the quay is not reachable from any of the starting points. As a
result, the controller learned by SARSA achieves very poor performance. On the other hand, a
finer discretization allows the boat to reach the quay more frequently, even if it takes about three
times as many episodes to converge as in the case with 5 actions. As it can be
noticed, SMC-learning with 5 samples outperforms SARSA with 5 and 10 actions both in terms of
performance and convergence time. In fact, after a few trials, SMC-learning succeeds in removing
the less-valued samples and in adding new samples in regions of the action space where higher rewards
can be obtained. As a result, not only it can achieve better performance than SARSA, but it does
not spend time exploring useless actions, thus improving also the convergence time. Nonetheless,
with only 5 samples the actor stores a very roughly approximated policy, which, as a consequence
of resampling, may converge to actions that do not obtain a performance as good as that of SARSA
with 20 and 40 actions. By increasing the number of samples from 5 to 10, SMC-learning succeeds
in realizing a better coverage of the action space, and obtains performance equivalent to that of SARSA
with 40 actions. At the same time, while SARSA takes longer to converge the more actions are
available, the convergence time of SMC-learning, as in the case with 5 samples, benefits from the
initial resampling, thus taking less than one sixth of the trials needed by SARSA to converge.
Figure 2-right shows the comparison of the performance of SMC-learning, SARSA with tile coding
using two tilings and a resolution of 2.25° (equivalent to 80 actions), and Continuous Q-learning
with 40 actions. We omit the results with fewer actions because both tile coding and Continuous
Q-learning obtain poor performance. As it can be noticed, SMC-learning outperforms both the
compared algorithms. In particular, the generalization over the action space performed by tile coding negatively affects the learning performance because of the non-linearity of the dynamics of the
system. In fact, when only a few actions are available, two adjacent actions may have completely different effects on the dynamics and, thus, receive different rewards. Generalizing over these actions
prevents the agent from learning which is the best action among those available. On the other hand,
as the samples get closer, SMC-learning dynamically reduces its generalization over the action space, so that their utility can be more accurately estimated. Similarly, Continuous Q-learning
is strictly related to the actions provided by the designer and to the implicit assumption of linearity
of the action-value function. As a result, although it could learn any real-valued action, it does not
succeed in obtaining the same performance as SMC-learning even with four times as many actions. In
fact, the capability of SMC-learning to move samples towards more rewarding regions of the action
space allows the agent to learn more effective policies even with a very limited number of samples.
5 Conclusions
In this paper, we have described a novel actor-critic algorithm to solve continuous action problems.
The algorithm is based on a Sequential Monte Carlo approach that allows the actor to represent
the current policy through a finite set of available actions associated to weights, which are updated
using the utility values computed by the critic. Experimental results show that SMC-learning is
able to identify the highest-valued actions through a process of importance sampling and resampling. This allows SMC-learning to obtain better performance than static solutions such
as Continuous Q-learning and tile coding even with a very limited number of samples, while also improving the convergence time. Future research activity will follow two main directions: extending
SMC-learning to problems in which no good discretization of the state space is a priori known, and
experimenting in continuous action multi-agent problems.
References
[1] M. Sanjeev Arulampalam, Simon Maskell, Neil Gordon, and Tim Clapp. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. on Signal Processing, 50(2):174–188, 2002.
[2] Leemon C. Baird and A. Harry Klopf. Reinforcement learning with high-dimensional, continuous actions. Technical Report WL-TR-93-117, Wright-Patterson Air Force Base, Ohio: Wright Laboratory, 1993.
[3] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA, 1996.
[4] Chris Gaskett, David Wettergreen, and Alexander Zelinsky. Q-learning in continuous state and action spaces. In Australian Joint Conference on Artificial Intelligence, pages 417–428, 2003.
[5] L. Jouffe. Fuzzy inference system learning by reinforcement methods. IEEE Trans. on Systems, Man, and Cybernetics, Part C, 28(3):338–355, 1998.
[6] H. Kimura and S. Kobayashi. Reinforcement learning for continuous action using stochastic gradient ascent. In 5th Intl. Conf. on Intelligent Autonomous Systems, pages 288–295, 1998.
[7] V. R. Konda and J. N. Tsitsiklis. Actor-critic algorithms. SIAM Journal on Control and Optimization, 42(4):1143–1166, 2003.
[8] J. S. Liu and R. Chen. Sequential Monte Carlo methods for dynamic systems. Journal of the American Statistical Association, 93:1032–1044, 1998.
[9] Jose del R. Millan, Daniele Posenato, and Eric Dedieu. Continuous-action Q-learning. Machine Learning, 49:247–265, 2002.
[10] Jan Peters and Stefan Schaal. Policy gradient methods for robotics. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), pages 2219–2225, 2006.
[11] J. C. Santamaria, R. S. Sutton, and A. Ram. Experiments with reinforcement learning in problems with continuous state and action spaces. Adaptive Behavior, 6:163–217, 1998.
[12] Alexander A. Sherstov and Peter Stone. Function approximation via tile coding: Automating parameter choice. In SARA 2005, LNAI, pages 194–205. Springer-Verlag, 2005.
[13] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[14] Hado van Hasselt and Marco Wiering. Reinforcement learning in continuous action spaces. In 2007 IEEE Symposium on Approximate Dynamic Programming and Reinforcement Learning, pages 272–279, 2007.
Adaptive Online Gradient Descent
Elad Hazan
IBM Almaden Research Center
650 Harry Road
San Jose, CA 95120
hazan@us.ibm.com
Peter L. Bartlett
Division of Computer Science
Department of Statistics
UC Berkeley
Berkeley, CA 94709
bartlett@cs.berkeley.edu
Alexander Rakhlin*
Division of Computer Science
UC Berkeley
Berkeley, CA 94709
rakhlin@cs.berkeley.edu
Abstract
We study the rates of growth of the regret in online convex optimization. First,
we show that a simple extension of the algorithm of Hazan et al eliminates the
need for a priori knowledge of the lower bound on the second derivatives of the
observed functions. We then provide an algorithm, Adaptive Online Gradient
Descent, which interpolates between the results of Zinkevich for linear functions
and of Hazan et al for strongly convex functions, achieving intermediate rates
between √T and log T. Furthermore, we show strong optimality of the algorithm.
Finally, we provide an extension of our results to general norms.
1 Introduction
The problem of online convex optimization can be formulated as a repeated game between a player
and an adversary. At round t, the player chooses an action x_t from some convex subset K of ℝⁿ,
and then the adversary chooses a convex loss function ft . The player aims to ensure that the total
loss, Σ_{t=1}^T f_t(x_t), is not much larger than the smallest total loss Σ_{t=1}^T f_t(x) of any fixed action x.
The difference between the total loss and its optimal value for a fixed action is known as the regret,
which we denote

R_T = Σ_{t=1}^T f_t(x_t) − min_{x∈K} Σ_{t=1}^T f_t(x).
Many problems of online prediction of individual sequences can be viewed as special cases of online
convex optimization, including prediction with expert advice, sequential probability assignment,
and sequential investment [1]. A central question in all these cases is how the regret grows with the
number of rounds of the game.
Zinkevich [2] considered the following gradient descent algorithm, with step size η_t = Θ(1/√t).
(Here, Π_K(v) denotes the Euclidean projection of v onto the convex set K.)
* Corresponding author.
Algorithm 1 Online Gradient Descent (OGD)
1: Initialize x_1 arbitrarily.
2: for t = 1 to T do
3:   Predict x_t, observe f_t.
4:   Update x_{t+1} = Π_K(x_t − η_{t+1} ∇f_t(x_t)).
5: end for
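For a Euclidean ball K, one OGD round is a gradient step followed by a rescaling. A minimal sketch follows; the ball-shaped feasible set is our illustrative choice, and any convex K works given its projection operator.

```python
import numpy as np

def ogd_step(x, grad, eta, radius):
    """One OGD round: x <- Pi_K(x - eta * grad), with K the centered
    Euclidean ball of the given radius."""
    y = x - eta * grad
    norm = np.linalg.norm(y)
    return y if norm <= radius else y * (radius / norm)
```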
Zinkevich showed that the regret of this algorithm grows as √T, where T is the number of rounds
of the game. This rate cannot be improved in general for arbitrary convex loss functions. However,
this is not the case if the loss functions are uniformly convex, for instance, if all f_t have second
derivative at least H > 0. Recently, Hazan et al [3] showed that in this case it is possible for the
regret to grow only logarithmically with T, using the same algorithm but with the smaller step size
η_t = 1/(Ht). Increasing convexity makes online convex optimization easier.
The algorithm that achieves logarithmic regret must know in advance a lower bound on the convexity
of the loss functions, since this bound is used to determine the step size. It is natural to ask if this is
essential: is there an algorithm that can adapt to the convexity of the loss functions and achieve the
same regret rates in both cases, O(log T) for uniformly convex functions and O(√T) for arbitrary
convex functions? In this paper, we present an adaptive algorithm of this kind.
The key technique is regularization: We consider the online gradient descent (OGD) algorithm,
but we add a uniformly convex function, the quadratic λ_t‖x‖², to each loss function f_t(x). This
corresponds to shrinking the algorithm's actions x_t towards the origin. It leads to a regret bound of
the form

R_T ≤ c Σ_{t=1}^T λ_t + p(λ_1, …, λ_T).
The first term on the right-hand side can be viewed as a bias term; it increases with λ_t because
the presence of the regularization might lead the algorithm away from the optimum. The second
term is a penalty for the flatness of the loss functions that becomes smaller as the regularization
increases. We show that choosing the regularization coefficient ?t so as to balance these two terms
in the bound
? on the regret up to round t is nearly optimal in a strong sense. Not only does this choice
give the T and log T regret rates in the linear and uniformly convex cases, it leads to a kind of
oracle inequality: The regret is no more than a constant factor times the bound on regret that would
have been suffered if an oracle had provided in advance the sequence of regularization coefficients
λ_1, …, λ_T that minimizes the final regret bound.
To state this result precisely, we introduce the following definitions. Let K be a convex subset of
ℝⁿ and suppose that sup_{x∈K} ‖x‖ ≤ D. For simplicity, throughout the paper we assume that K is
centered around 0, and, hence, 2D is the diameter of K. Define the shorthand ∇_t = ∇f_t(x_t). Let H_t
be the largest value such that, for any x* ∈ K,

f_t(x*) ≥ f_t(x_t) + ∇_tᵀ(x* − x_t) + (H_t/2) ‖x* − x_t‖².    (1)

In particular, if ∇²f_t ⪰ H_t I ⪰ 0, then the above inequality is satisfied. Furthermore, suppose
‖∇_t‖ ≤ G_t. Define λ_{1:t} := Σ_{s=1}^t λ_s and H_{1:t} := Σ_{s=1}^t H_s, with H_{1:0} = 0. Let us now state the
Adaptive Online Gradient Descent algorithm as well as the theoretical guarantee for its performance.
Algorithm 2 Adaptive Online Gradient Descent
1: Initialize x_1 arbitrarily.
2: for t = 1 to T do
3:   Predict x_t, observe f_t.
4:   Compute λ_t = (1/2) [ √((H_{1:t} + λ_{1:t−1})² + 8G_t²/(3D²)) − (H_{1:t} + λ_{1:t−1}) ].
5:   Compute η_{t+1} = (H_{1:t} + λ_{1:t})^{−1}.
6:   Update x_{t+1} = Π_K(x_t − η_{t+1}(∇f_t(x_t) + λ_t x_t)).
7: end for
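The per-round computations of Algorithm 2 are cheap. The sketch below (with `project` standing in for Π_K, and array-like inputs assumed) keeps running sums of H_t and λ_t and applies steps 4 to 6.

```python
import math

def adaptive_ogd_round(x, grad_f, H_sum, lam_sum, G_t, D, project):
    """Steps 4-6 of Algorithm 2. H_sum = H_{1:t}, lam_sum = lambda_{1:t-1};
    `project` is the Euclidean projection onto K (assumed to be given)."""
    a = H_sum + lam_sum
    lam_t = 0.5 * (math.sqrt(a * a + 8.0 * G_t ** 2 / (3.0 * D ** 2)) - a)
    eta_next = 1.0 / (H_sum + lam_sum + lam_t)       # = 1 / (H_{1:t} + lambda_{1:t})
    x_next = project(x - eta_next * (grad_f + lam_t * x))
    return x_next, lam_sum + lam_t, eta_next
```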
Theorem 1.1. The regret of Algorithm 2 is bounded by
R_T ≤ 3 inf_{λ*_1,…,λ*_T} ( D² λ*_{1:T} + Σ_{t=1}^T (G_t + λ*_t D)² / (H_{1:t} + λ*_{1:t}) ).
While Algorithm 2 is stated with the squared Euclidean norm as a regularizer, we show that it
is straightforward to generalize our technique to other regularization functions that are uniformly
convex with respect to other norms. This leads to adaptive versions of the mirror descent algorithm
analyzed recently in [4, 5].
2 Preliminary results
The following theorem gives a regret bound for the OGD algorithm with a particular choice of step
size. The virtue of the theorem is that the step size can be set without knowledge of the uniform
lower bound on Ht , which is required in the original algorithm of [3]. The proof is provided in
Section 4 (Theorem 4.1), where the result is extended to arbitrary norms.
Theorem 2.1. Suppose we set η_{t+1} = 1/H_{1:t}. Then the regret of OGD is bounded as

R_T ≤ (1/2) Σ_{t=1}^T G_t² / H_{1:t}.
In particular, loosening the bound,
2R_T ≤ [ max_t G_t² / ( min_t (1/t) Σ_{s=1}^t H_s ) ] log T.
Note that nothing prevents H_t from being negative or zero, implying that the same algorithm gives
logarithmic regret even when some of the functions are linear or concave, as long as the partial
averages (1/t) Σ_{s=1}^t H_s are positive and not too small. The above result already provides an important
extension to the log-regret algorithm of [3]: no prior knowledge of the uniform convexity of the
functions is needed, and the bound is in terms of the observed sequence {H_t}. Yet, there is still a
problem with the algorithm. If H_1 > 0 and H_t = 0 for all t > 1, then Σ_{s=1}^t H_s = H_1, resulting
in a linear regret bound. However, we know from [2] that an O(√T) bound can be obtained. In the
next section we provide an algorithm which interpolates between an O(log T) and an O(√T) bound on
the regret, depending on the curvature of the observed functions.
3 Adaptive Regularization
Suppose the environment plays a sequence of f_t's with curvature H_t ≥ 0. Instead of performing
gradient descent on these functions, we step in the direction of the gradient of f̃_t(x) = f_t(x) + (λ_t/2)‖x‖², where the regularization parameter λ_t ≥ 0 is chosen appropriately at each step as a
function of the curvature of the previous functions. We remind the reader that K is assumed to be
centered around the origin, for otherwise we would instead use ‖x − x_0‖² to shrink the actions x_t
towards the origin x_0. Applying Theorem 2.1, we obtain the following result.
Theorem 3.1. If the Online Gradient Descent algorithm is performed on the functions f̃_t(x) = f_t(x) + (λ_t/2)‖x‖² with

η_{t+1} = 1 / (H_{1:t} + λ_{1:t})

for any sequence of non-negative λ_1, …, λ_T, then

R_T ≤ (1/2) D² λ_{1:T} + (1/2) Σ_{t=1}^T (G_t + λ_t D)² / (H_{1:t} + λ_{1:t}).
Proof. By Theorem 2.1 applied to the functions f̃_t,

Σ_{t=1}^T ( f_t(x_t) + (λ_t/2)‖x_t‖² ) ≤ min_x Σ_{t=1}^T ( f_t(x) + (λ_t/2)‖x‖² ) + (1/2) Σ_{t=1}^T (G_t + λ_t D)² / (H_{1:t} + λ_{1:t}).

Indeed, it is easy to verify that condition (1) for f_t implies the corresponding statement with H̃_t = H_t + λ_t for f̃_t. Furthermore, by linearity, the bound on the gradient of f̃_t is G_t + λ_t‖x_t‖ ≤ G_t + λ_t D. Define x* = arg min_x Σ_{t=1}^T f_t(x). Then, dropping the ‖x_t‖² terms and bounding ‖x*‖² ≤ D²,

Σ_{t=1}^T f_t(x_t) ≤ Σ_{t=1}^T f_t(x*) + (1/2) D² λ_{1:T} + (1/2) Σ_{t=1}^T (G_t + λ_t D)² / (H_{1:t} + λ_{1:t}),

which proves the theorem.
The following inequality is important in the rest of the analysis, as it allows us to remove the dependence on λ_t from the numerator of the second sum at the expense of increased constants. We have

(1/2) D² λ_{1:T} + (1/2) Σ_{t=1}^T (G_t + λ_t D)² / (H_{1:t} + λ_{1:t})
  ≤ (1/2) D² λ_{1:T} + Σ_{t=1}^T [ G_t² / (H_{1:t} + λ_{1:t}) + λ_t² D² / (H_{1:t} + λ_{1:t−1} + λ_t) ]
  ≤ (3/2) D² λ_{1:T} + Σ_{t=1}^T G_t² / (H_{1:t} + λ_{1:t}),    (2)

where the first inequality holds because (a + b)² ≤ 2a² + 2b² for any a, b ∈ ℝ, and the second because λ_t / (H_{1:t} + λ_{1:t−1} + λ_t) ≤ 1.
It turns out that for appropriate choices of {λ_t}, the above theorem recovers the O(√T) bound on the
regret for linear functions [2] and the O(log T) bound for strongly convex functions [3]. Moreover,
under specific assumptions on the sequence {H_t}, we can define a sequence {λ_t} which produces
intermediate rates between log T and √T. These results are exhibited in corollaries at the end of
this section.
Of course, it would be nice to be able to choose {λ_t} adaptively without any restrictive assumptions on {H_t}. Somewhat surprisingly, such a choice can be made near-optimally by simple local
balancing. Observe that the upper bound of Eq. (2) consists of two sums: D² Σ_{t=1}^T λ_t and
Σ_{t=1}^T G_t² / (H_{1:t} + λ_{1:t}). The first sum increases in any particular λ_t and the other decreases. While the
influence of the regularization parameters λ_t on the first sum is trivial, the influence on the second
sum is more involved, as all terms for t ≥ t_0 depend on λ_{t_0}. Nevertheless, it turns out that a simple
choice of λ_t is optimal to within a multiplicative factor of 2. This is exhibited by the next lemma.
Lemma 3.1. Define

H_T({λ_t}) = H_T(λ_1, …, λ_T) = λ_{1:T} + Σ_{t=1}^T C_t / (H_{1:t} + λ_{1:t}),

where C_t ≥ 0 does not depend on the λ_t's. If λ_t satisfies λ_t = C_t / (H_{1:t} + λ_{1:t}) for t = 1, …, T, then

H_T({λ_t}) ≤ 2 inf_{{λ*_t} ≥ 0} H_T({λ*_t}).
Proof. We prove this by induction. Let {λ*_t} be the optimal sequence of non-negative regularization
coefficients. The base of the induction is proved by considering two possibilities: either λ_1 < λ*_1 or
not. In the first case, λ_1 + C_1/(H_1 + λ_1) = 2λ_1 ≤ 2λ*_1 ≤ 2(λ*_1 + C_1/(H_1 + λ*_1)). The other case
is proved similarly.
Now, suppose

H_{T−1}({λ_t}) ≤ 2 H_{T−1}({λ*_t}).

Consider two possibilities. If λ_{1:T} < λ*_{1:T}, then

H_T({λ_t}) = λ_{1:T} + Σ_{t=1}^T C_t / (H_{1:t} + λ_{1:t}) = 2λ_{1:T} ≤ 2λ*_{1:T} ≤ 2 H_T({λ*_t}).

If, on the other hand, λ_{1:T} ≥ λ*_{1:T}, then

λ_T + C_T/(H_{1:T} + λ_{1:T}) = 2 C_T/(H_{1:T} + λ_{1:T}) ≤ 2 C_T/(H_{1:T} + λ*_{1:T}) ≤ 2 ( λ*_T + C_T/(H_{1:T} + λ*_{1:T}) ).

Using the inductive assumption, we obtain

H_T({λ_t}) ≤ 2 H_T({λ*_t}).
The lemma above is the key to the proof of the near-optimal bounds for Algorithm 2.¹
Proof (of Theorem 1.1). By Eq. (2) and Lemma 3.1,

R_T ≤ (3/2) D² λ_{1:T} + Σ_{t=1}^T G_t² / (H_{1:t} + λ_{1:t})
   ≤ 2 inf_{λ*_1,…,λ*_T} ( (3/2) D² λ*_{1:T} + Σ_{t=1}^T G_t² / (H_{1:t} + λ*_{1:t}) )
   ≤ 6 inf_{λ*_1,…,λ*_T} ( (1/2) D² λ*_{1:T} + (1/2) Σ_{t=1}^T (G_t + λ*_t D)² / (H_{1:t} + λ*_{1:t}) ),

provided the λ_t are chosen as solutions to

(3/2) D² λ_t = G_t² / (H_{1:t} + λ_{1:t−1} + λ_t).    (3)

It is easy to verify that

λ_t = (1/2) [ √((H_{1:t} + λ_{1:t−1})² + 8G_t²/(3D²)) − (H_{1:t} + λ_{1:t−1}) ]

is the non-negative root of the above quadratic equation. We note that division by zero in Algorithm 2
occurs only if λ_1 = H_1 = G_1 = 0. Without loss of generality, G_1 ≠ 0, for otherwise x_1
minimizes f_1(x) and the regret is negative on that round.
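As a quick numerical sanity check (ours, not the paper's), one can confirm that this closed form solves the balance equation (3):

```python
import math
import random

for _ in range(1000):
    a = random.uniform(0.0, 10.0)   # a = H_{1:t} + lambda_{1:t-1}
    G = random.uniform(0.1, 5.0)
    D = random.uniform(0.1, 5.0)
    lam = 0.5 * (math.sqrt(a * a + 8.0 * G * G / (3.0 * D * D)) - a)
    assert abs(1.5 * D * D * lam - G * G / (a + lam)) < 1e-6
```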
Hence, the algorithm has a bound on the performance which is 6 times the bound obtained by the
best offline adaptive choice of regularization coefficients. While the constant 6 might not be optimal,
it can be shown that a constant strictly larger than one is unavoidable (see previous footnote).
We also remark that if the diameter D is unknown, the regularization coefficients λ_t can still be
chosen by balancing as in Eq. (3), except without the D² term. This choice of λ_t, however, increases
the bound on the regret suffered by Algorithm 2 by a factor of O(D²).
Let us now consider some special cases and show that Theorem 1.1 not only recovers the rate of
increase of regret of [3] and [2], but also provides intermediate rates. For each of these special cases,
we provide a sequence of {λ_t} which achieves the desired rates. Since Theorem 1.1 guarantees that
Algorithm 2 is competitive with the best choice of the parameters, we conclude that Algorithm 2
achieves the same rates.
Corollary 3.1. Suppose G_t ≤ G for all 1 ≤ t ≤ T. Then for any sequence of convex functions {f_t}, the bound on the regret of Algorithm 2 is O(√T).

Proof. Let λ_1 = √T and λ_t = 0 for 1 < t ≤ T. By Eq. (2),

(1/2) D² λ_{1:T} + (1/2) Σ_{t=1}^T (G_t + λ_t D)² / (H_{1:t} + λ_{1:t}) ≤ (3/2) D² λ_{1:T} + Σ_{t=1}^T G_t² / (H_{1:t} + λ_{1:t})
  ≤ (3/2) D² √T + Σ_{t=1}^T G² / √T = ( (3/2) D² + G² ) √T.

¹ Lemma 3.1 effectively describes an algorithm for an online problem with competitive ratio of 2. In the full version of this paper we give a lower bound strictly larger than one on the competitive ratio achievable by any online algorithm for this problem.
Hence, the regret of Algorithm 2 can never increase faster than √T. We now consider the assumptions of [3].

Corollary 3.2. Suppose H_t ≥ H > 0 and G_t² ≤ G for all 1 ≤ t ≤ T. Then the bound on the regret of Algorithm 2 is O(log T).

Proof. Set λ_t = 0 for all t. It holds that R_T ≤ (1/2) Σ_{t=1}^T G_t² / H_{1:t} ≤ (1/2) Σ_{t=1}^T G / (tH) ≤ (G/(2H)) (log T + 1).
The above proof also recovers the result of Theorem 2.1. The following corollary shows a spectrum of rates under assumptions on the curvature of the functions.

Corollary 3.3. Suppose H_t = t^{−α} and G_t ≤ G for all 1 ≤ t ≤ T.
1. If α = 0, then R_T = O(log T).
2. If α > 1/2, then R_T = O(√T).
3. If 0 < α ≤ 1/2, then R_T = O(T^α).
Proof. The first two cases follow immediately from Corollaries 3.2 and 3.1. For the third case, let λ_1 = T^α and λ_t = 0 for 1 < t ≤ T. Note that Σ_{s=1}^t H_s ≥ ∫_0^{t−1} (x + 1)^{−α} dx = (1 − α)^{−1} t^{1−α} − (1 − α)^{−1}. Hence,

(1/2) D² λ_{1:T} + (1/2) Σ_{t=1}^T (G_t + λ_t D)² / (H_{1:t} + λ_{1:t}) ≤ (3/2) D² λ_{1:T} + Σ_{t=1}^T G_t² / (H_{1:t} + λ_{1:t})
  ≤ 2 D² T^α + G² (1 − α) Σ_{t=1}^T 1 / ( t^{1−α} − 1 + (1 − α) T^α ) = O(T^α).

4 Generalization to different norms
The original online gradient descent (OGD) algorithm as analyzed by Zinkevich [2] used the Euclidean distance of the current point from the optimum as a potential function. The logarithmic
regret bounds of [3] for strongly convex functions were also stated for the Euclidean norm, and
such was the presentation above. However, as observed by Shalev-Shwartz and Singer in [5], the
proof technique of [3] extends to arbitrary norms. As such, our results above for adaptive regularization carry over to the general setting, as we state below. Our notation follows that of Gentile and
Warmuth [6].
Definition 4.1. A function $g$ over a convex set $K$ is called $H$-strongly convex with respect to a
convex function $h$ if
$$\forall x, y \in K: \quad g(x) \;\geq\; g(y) + \nabla g(y)^{\top}(x - y) + \frac{H}{2}B_h(x, y).$$
Here $B_h(x, y)$ is the Bregman divergence with respect to the function $h$, defined as
$$B_h(x, y) \;=\; h(x) - h(y) - \nabla h(y)^{\top}(x - y).$$
This notion of strong convexity generalizes the Euclidean notion: the function $g(x) = \|x\|_2^2$ is
strongly convex with respect to $h(x) = \|x\|_2^2$ (in this case $B_h(x, y) = \|x - y\|_2^2$). More generally, the Bregman divergence can be thought of as a squared norm, not necessarily Euclidean,
i.e., $B_h(x, y) = \|x - y\|^2$. Henceforth we also refer to the dual norm of a given norm, defined
by $\|y\|_* = \sup_{\|x\| \leq 1}\{y^{\top}x\}$. For the case of $\ell_p$ norms, we have $\|y\|_* = \|y\|_q$ where $q$ satisfies
$\frac{1}{p} + \frac{1}{q} = 1$, and by H\"older's inequality, $y^{\top}x \leq \|y\|_*\|x\| \leq \frac{1}{2}\|y\|_*^2 + \frac{1}{2}\|x\|^2$ (this holds for norms
other than $\ell_p$ as well).
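As a small self-contained check of Definition 4.1 (our own illustration, not from the paper), the snippet below evaluates $B_h$ numerically and confirms that for $h(x) = \|x\|_2^2$ it reduces to the squared Euclidean distance:

```python
import numpy as np

def bregman(h, grad_h, x, y):
    """Bregman divergence B_h(x, y) = h(x) - h(y) - <grad h(y), x - y>."""
    return h(x) - h(y) - grad_h(y) @ (x - y)

h = lambda x: x @ x            # h(x) = ||x||_2^2
grad_h = lambda x: 2.0 * x     # grad h(x) = 2x

x = np.array([1.0, -2.0, 0.5])
y = np.array([0.0, 1.0, 1.0])
print(bregman(h, grad_h, x, y))   # B_h(x, y)
print(np.sum((x - y) ** 2))       # ||x - y||_2^2, identical value
```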
For simplicity, the reader may think of the functions $g, h$ as convex and differentiable². The following algorithm is a generalization of the OGD algorithm to general strongly convex functions (see
the derivation in [6]). In this extended abstract we state the update rule implicitly, leaving the issues
of efficient computation for the full version (these issues are orthogonal to our discussion, and were
addressed in [6] for a variety of functions $h$).
Algorithm 3 General-Norm Online Gradient Descent
1: Input: convex function $h$
2: Initialize $x_1$ arbitrarily.
3: for $t = 1$ to $T$ do
4:   Predict $x_t$, observe $f_t$.
5:   Compute $\eta_{t+1}$ and let $y_{t+1}$ be such that $\nabla h(y_{t+1}) = \nabla h(x_t) - 2\eta_{t+1}\nabla f_t(x_t)$.
6:   Let $x_{t+1} = \arg\min_{x \in K} B_h(x, y_{t+1})$ be the projection of $y_{t+1}$ onto $K$.
7: end for
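A minimal sketch of Algorithm 3 for the Euclidean case $h(x) = \|x\|_2^2$ (our own toy instantiation: the losses, the unit-ball domain and the step-size schedule are illustrative assumptions). Here $\nabla h(x) = 2x$, so step 5 reduces to a plain gradient step and step 6 to the ordinary Euclidean projection:

```python
import numpy as np

def general_norm_ogd(grad_f, T, dim, eta):
    """Algorithm 3 specialized to h(x) = ||x||_2^2 on K = unit ball.

    With grad h(x) = 2x, step 5 (2y = 2x - 2*eta*grad f) reduces to
    y = x - eta * grad f, and the Bregman projection of step 6 is the
    Euclidean projection onto K.
    """
    x = np.zeros(dim)
    for t in range(1, T + 1):
        g = grad_f(x, t)                     # predict x_t, observe f_t
        y = x - eta(t) * g                   # mirror step for this h
        x = y / max(1.0, np.linalg.norm(y))  # project onto the unit ball
    return x

# toy losses f_t(x) = ||x - z_t||^2 (2-strongly convex w.r.t. h),
# so eta_{t+1} = 1 / H_{1:t} = 1 / (2t)
rng = np.random.default_rng(0)
z = 0.1 * rng.normal(size=(101, 3))
x_final = general_norm_ogd(lambda x, t: 2.0 * (x - z[t]), T=100, dim=3,
                           eta=lambda t: 1.0 / (2.0 * t))
```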
The methods of the previous sections can now be used to derive similar, dynamically optimal bounds
on the regret. As a first step, let us generalize the bound of [3], as well as Theorem 2.1, to general
norms:
Theorem 4.1. Suppose that, for each $t$, $f_t$ is an $H_t$-strongly convex function with respect to $h$, and let
$h$ be such that $B_h(x, y) \geq \|x - y\|^2$ for some norm $\|\cdot\|$. Let $\|\nabla f_t(x_t)\|_* \leq G_t$ for all $t$. Applying
the General-Norm Online Gradient Algorithm with $\eta_{t+1} = \frac{1}{H_{1:t}}$, we have
$$R_T \;\leq\; \frac{1}{2}\sum_{t=1}^{T}\frac{G_t^2}{H_{1:t}}.$$
Proof. The proof follows [3], with the Bregman divergence replacing the Euclidean distance as a
potential function. By assumption on the functions $f_t$, for any $x^* \in K$,
$$f_t(x_t) - f_t(x^*) \;\leq\; \nabla f_t(x_t)^{\top}(x_t - x^*) - \frac{H_t}{2}B_h(x^*, x_t).$$
By a well-known property of Bregman divergences (see [6]), it holds that for any vectors $x, y, z$,
$$(x - y)^{\top}(\nabla h(z) - \nabla h(y)) \;=\; B_h(x, y) - B_h(x, z) + B_h(y, z).$$
Combining both observations,
$$2(f_t(x_t) - f_t(x^*)) \;\leq\; 2\nabla f_t(x_t)^{\top}(x_t - x^*) - H_t B_h(x^*, x_t)$$
$$=\; \frac{1}{\eta_{t+1}}(\nabla h(y_{t+1}) - \nabla h(x_t))^{\top}(x^* - x_t) - H_t B_h(x^*, x_t)$$
$$=\; \frac{1}{\eta_{t+1}}\left[B_h(x^*, x_t) - B_h(x^*, y_{t+1}) + B_h(x_t, y_{t+1})\right] - H_t B_h(x^*, x_t)$$
$$\leq\; \frac{1}{\eta_{t+1}}\left[B_h(x^*, x_t) - B_h(x^*, x_{t+1}) + B_h(x_t, y_{t+1})\right] - H_t B_h(x^*, x_t),$$
where the last inequality follows from the Pythagorean Theorem for Bregman divergences [6], as
$x_{t+1}$ is the projection w.r.t. the Bregman divergence of $y_{t+1}$ and $x^* \in K$ is in the convex set.
Summing over all iterations and recalling that $\eta_{t+1} = \frac{1}{H_{1:t}}$,
$$2R_T \;\leq\; \sum_{t=2}^{T}B_h(x^*, x_t)\left(\frac{1}{\eta_{t+1}} - \frac{1}{\eta_t} - H_t\right) + B_h(x^*, x_1)\left(\frac{1}{\eta_2} - H_1\right) + \sum_{t=1}^{T}\frac{1}{\eta_{t+1}}B_h(x_t, y_{t+1})$$
$$=\; \sum_{t=1}^{T}\frac{1}{\eta_{t+1}}B_h(x_t, y_{t+1}). \qquad (4)$$
²Since the set of points of nondifferentiability of convex functions has measure zero, convexity is the only property that we require. Indeed, for nondifferentiable functions, the algorithm would choose a point $\tilde{x}_t$, which is $x_t$ with the addition of a small random perturbation. With probability one, the functions would be smooth at the perturbed point, and the perturbation could be made arbitrarily small so that the regret rate would not be affected.
We proceed to bound $B_h(x_t, y_{t+1})$. By the definition of the Bregman divergence, and the dual norm inequality stated before,
$$B_h(x_t, y_{t+1}) + B_h(y_{t+1}, x_t) \;=\; (\nabla h(x_t) - \nabla h(y_{t+1}))^{\top}(x_t - y_{t+1})$$
$$=\; 2\eta_{t+1}\nabla f_t(x_t)^{\top}(x_t - y_{t+1}) \;\leq\; \eta_{t+1}^2\|\nabla f_t(x_t)\|_*^2 + \|x_t - y_{t+1}\|^2.$$
Thus, by our assumption $B_h(x, y) \geq \|x - y\|^2$, we have
$$B_h(x_t, y_{t+1}) \;\leq\; \eta_{t+1}^2\|\nabla f_t(x_t)\|_*^2 + \|x_t - y_{t+1}\|^2 - B_h(y_{t+1}, x_t) \;\leq\; \eta_{t+1}^2\|\nabla f_t(x_t)\|_*^2.$$
Plugging back into Eq. (4) we get
$$R_T \;\leq\; \frac{1}{2}\sum_{t=1}^{T}\eta_{t+1}G_t^2 \;=\; \frac{1}{2}\sum_{t=1}^{T}\frac{G_t^2}{H_{1:t}}.$$
The generalization of our technique is now straightforward. Let $A^2 = \sup_{x \in K} g(x)$ and $2B =
\sup_{x \in K}\|\nabla g(x)\|_*$. The following algorithm is an analogue of Algorithm 2, and Theorem 4.2 is the
analogue of Theorem 1.1 for general norms.
Algorithm 4 Adaptive General-Norm Online Gradient Descent
1: Initialize $x_1$ arbitrarily. Let $g(x)$ be 1-strongly convex with respect to the convex function $h$.
2: for $t = 1$ to $T$ do
3:   Predict $x_t$, observe $f_t$.
4:   Compute $\lambda_t = \frac{1}{2}\left(\sqrt{(H_{1:t} + \lambda_{1:t-1})^2 + 8G_t^2/(A^2 + 2B^2)} - (H_{1:t} + \lambda_{1:t-1})\right)$.
5:   Compute $\eta_{t+1} = (H_{1:t} + \lambda_{1:t})^{-1}$.
6:   Let $y_{t+1}$ be such that $\nabla h(y_{t+1}) = \nabla h(x_t) - 2\eta_{t+1}\left(\nabla f_t(x_t) + \frac{\lambda_t}{2}\nabla g(x_t)\right)$.
7:   Let $x_{t+1} = \arg\min_{x \in K} B_h(x, y_{t+1})$ be the projection of $y_{t+1}$ onto $K$.
8: end for
Theorem 4.2. Suppose that each $f_t$ is an $H_t$-strongly convex function with respect to $h$, and let $g$ be
1-strongly convex with respect to $h$. Let $h$ be such that $B_h(x, y) \geq \|x - y\|^2$ for some norm $\|\cdot\|$. Let
$\|\nabla f_t(x_t)\|_* \leq G_t$. The regret of Algorithm 4 is bounded by
$$R_T \;\leq\; 3\inf_{\lambda_1^*, \ldots, \lambda_T^*}\left((A^2 + 2B^2)\lambda_{1:T}^* + \sum_{t=1}^{T}\frac{(G_t + \lambda_t^* B)^2}{H_{1:t} + \lambda_{1:t}^*}\right).$$
If the norm in the above theorem is the Euclidean norm and $g(x) = \|x\|^2$, we find that $D =
\sup_{x \in K}\|x\| = A = B$ and recover the results of Theorem 1.1.
References
[1] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[2] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, pages 928–936, 2003.
[3] Elad Hazan, Adam Kalai, Satyen Kale, and Amit Agarwal. Logarithmic regret algorithms for online convex optimization. In COLT, pages 499–513, 2006.
[4] Shai Shalev-Shwartz and Yoram Singer. Convex repeated games and Fenchel duality. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press, Cambridge, MA, 2007.
[5] Shai Shalev-Shwartz and Yoram Singer. Logarithmic regret algorithms for strongly convex repeated games. In Technical Report 2007-42. The Hebrew University, 2007.
[6] C. Gentile and M. K. Warmuth. Proving relative loss bounds for on-line learning algorithms using Bregman divergences. In COLT. Tutorial, 2000.
2,558 | 332 | Asymptotic slowing down of the
nearest- neighbor classifier
Robert R. Snapp
CS lEE Department
University of Vermont
Burlington, VT 05405
Demetri Psaltis
Electrical Engineering
Caltech 116-81
Pasadena, CA 91125
Santosh S. Venkatesh
Electrical Engineering
University of Pennsylvania
Philadelphia, PA 19104
Abstract
If patterns are drawn from an $n$-dimensional feature space according to a
probability distribution that obeys a weak smoothness criterion, we show
that the probability that a random input pattern is misclassified by a
nearest-neighbor classifier using $M$ random reference patterns asymptotically satisfies
$$P_M(\text{error}) \;\sim\; P_\infty(\text{error}) + \frac{a}{M^{2/n}},$$
for sufficiently large values of $M$. Here, $P_\infty(\text{error})$ denotes the probability
of error in the infinite sample limit, and is at most twice the error of a
Bayes classifier. Although the value of the coefficient $a$ depends upon the
underlying probability distributions, the exponent of $M$ is largely distribution free. We thus obtain a concise relation between a classifier's ability
to generalize from a finite reference sample and the dimensionality of the
feature space, as well as an analytic validation of Bellman's well known
"curse of dimensionality."
1 INTRODUCTION
One of the primary tasks assigned to neural networks is pattern classification. Common applications include recognition problems dealing with speech, handwritten
characters, DNA sequences, military targets, and (in this conference) sexual identity. Two fundamental concepts associated with pattern classification are generalization (how well does a classifier respond to input data it has never encountered
before?) and scalability (how are a classifier's processing and training requirements
affected by increasing the number of features that describe the input patterns?).
Despite recent progress, our present understanding of these concepts in the context of neural networks is obstructed by complexities in the functional form of the
network and in the classification problems themselves.
In this correspondence we will present analytic results on these issues for the nearest-neighbor classifier. Noted for its algorithmic simplicity and nearly optimal performance in the
field of pattern recognition. Furthermore, because it uses proximity in feature space
as a measure of class similarity, its performance on a given classification problem
should yield qualitative cues to the performance of a. neural network. Indeed, a
nearest-neighbor classifier can be readily implemented as a "winner-take-all" neural
network.
2
THE TASK OF PATTERN CLASSIFICATION
We begin with a formulation of the two-class problem (Duda and Hart, 1973):
Let the labels WI and W2 denote two states of nature, or pattern classes.
A pattern belonging to one of these two classes is selected, and a vector of
n features, x, that describe the selected pattern is presented to a pattern
classifier. The classifier then attempts to guess the selected pattern's class
by assigning x to either WI or W2.
As an example, the two class labels might represent the states benign and malignant
as they pertain to the diagnosis of cancer tumors; the feature vector could then be
a 1024 x 1024 pixel, real-valued representation of an electron-microscope image. A
pattern classifier can thus be viewed as a mapping from an n-dimensional feature
space to the discrete set {WI,W2}, and can be specified by demarcating the regions
in the n-dimensional feature space that correspond to WI and W2. We define the
decision region ni as the set of feature vectors that the pattern classifier assigns to
WI, with a.n analogous definition for n 2 . A useful figure of merit is the probability
that the feature vector of a randomly selected pattern is assigned to the correct
class.
2.1 THE BAYES CLASSIFIER
If sufficient information is available, it is possible to construct an optimal pattern classifier. Let $P(\omega_1)$ and $P(\omega_2)$ denote the prior probabilities of the two states of nature. (For our cancer diagnosis problem, the prior probabilities can be estimated by the relative frequency of each type of tumor in a large statistical sample.) Further, let $p(x \mid \omega_1)$ and $p(x \mid \omega_2)$ denote the class-conditional probability densities of the feature vector for the two class problem. The total probability density is now defined by $p(x) = p(x \mid \omega_1)P(\omega_1) + p(x \mid \omega_2)P(\omega_2)$, and gives the unconditional distribution of the feature vector. Where $p(x) \neq 0$ we can now use Bayes' rule to compute the posterior probabilities:
$$P(\omega_1 \mid x) = \frac{p(x \mid \omega_1)P(\omega_1)}{p(x)} \qquad \text{and} \qquad P(\omega_2 \mid x) = \frac{p(x \mid \omega_2)P(\omega_2)}{p(x)}.$$
The Bayes classifier assigns an unclassified feature vector $x$ to the class label having
the greatest posterior probability. (If the posterior probabilities happen to be equal, then the class assignment is arbitrary.) With $\mathcal{R}_1$ and $\mathcal{R}_2$ denoting the two decision regions induced by this strategy, the probability of error of the Bayes classifier, $P_B$, is just the probability that $x$ is drawn from class $\omega_1$ but lies in the Bayes decision region $\mathcal{R}_2$, or conversely, that $x$ is drawn from class $\omega_2$ but lies in the Bayes decision region $\mathcal{R}_1$:
$$P_B \;=\; \int_{\mathcal{R}_2} p(x \mid \omega_1)P(\omega_1)\,d^n x \;+\; \int_{\mathcal{R}_1} p(x \mid \omega_2)P(\omega_2)\,d^n x.$$
The reader may verify that the Bayes classifier minimizes the probability of error.
Unfortunately, it is usually impossible to obtain expressions for the class-conditional
densities and prior probabilities in practice. Typically, the available information
resides in a set of correctly labeled patterns, which we collectively term a training
or reference sample. Over the last few decades, numerous pattern classification
strategies have been developed that attempt to learn the structure of a classification
problem from a finite training sample. (The backpropagation algorithm is a recent
example.) The underlying hope is that the classifier's performance can be made
acceptable with a sufficiently large reference sample. In order to understand how
large a sample may be needed, we turn to what is perhaps the simplest learning
algorithm of this class.
3 THE NEAREST-NEIGHBOR CLASSIFIER
Let $\mathcal{X}_M = \{(x^{(1)}, \theta^{(1)}), (x^{(2)}, \theta^{(2)}), \ldots, (x^{(M)}, \theta^{(M)})\}$ denote a finite reference sample of $M$ feature vectors, $x^{(i)} \in \mathbb{R}^n$, with corresponding known class assignments, $\theta^{(i)} \in \{\omega_1, \omega_2\}$. The nearest-neighbor rule assigns each feature vector $x$ to class $\omega_1$ or $\omega_2$ as a function of the reference $M$-sample as follows:
• Identify $(x', \theta') \in \mathcal{X}_M$ such that $\|x - x'\| \leq \|x - x^{(i)}\|$ for $i$ ranging from 1 through $M$;
• Assign $x$ to class $\theta'$.
Here, $\|x - y\|^2 = \sum_{j=1}^{n}(x_j - y_j)^2$ denotes the Euclidean metric in $\mathbb{R}^n$.¹ The nearest-neighbor rule hence classifies each feature vector $x$ according to the label, $\theta'$, of the closest point, $x'$, in the reference sample. As an example, we sketch the nearest-neighbor decision regions for a two-dimensional classification problem in Fig. 1.
[Figure 1: a seven-point labeled reference set in the plane, with the piecewise-linear decision regions it induces.]
Figure 1: The decision regions induced by a nearest-neighbor classifier with a seven-element reference set in the plane.
¹Other metrics, such as the more general Minkowski-r metric, are also possible.
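A compact sketch of the rule above (our own illustration; the toy reference set is made up):

```python
import numpy as np

def nearest_neighbor_classify(x, refs, labels):
    """Assign x the label of the closest reference pattern (Euclidean metric).

    refs   : (M, n) array of reference feature vectors x^(i)
    labels : (M,) array of class assignments theta^(i)
    """
    d2 = np.sum((refs - x) ** 2, axis=1)   # squared distances ||x - x^(i)||^2
    return labels[np.argmin(d2)]

# toy two-class reference sample in the plane
refs = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
labels = np.array([1, 2, 2])
print(nearest_neighbor_classify(np.array([0.4, 0.2]), refs, labels))  # -> 1
```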
It is interesting to consider how the performance of this classifier compares with that
of a Bayes classifier. To facilitate this analysis, we assume that the reference patterns
are selected from the total probability density $p(x)$ in a statistically independent
manner (i.e., the choice of $x^{(j)}$ does not in any way bias the selection of $x^{(j+1)}$ and
$\theta^{(j+1)}$). Furthermore, let $P_M(\text{error})$ denote the probability of error of a nearest-neighbor classifier working with the reference sample $\mathcal{X}_M$, and let $P_\infty(\text{error})$ denote
this probability in the infinite sample limit ($M \to \infty$). We will also let $S$ denote
the volume in feature space over which $p(x)$ is nonzero. The following well known
theorem shows that the nearest-neighbor classifier, with an infinite reference sample,
is nearly optimal (Cover and Hart, 1967).²
Theorem 1 For the two-class problem in the infinite sample limit, the probability
of error of a nearest-neighbor classifier tends toward the value
$$P_\infty(\text{error}) \;=\; 2\int_S P(\omega_1 \mid x)P(\omega_2 \mid x)\,p(x)\,d^n x,$$
which is furthermore bounded by the two inequalities
$$P_B \;\leq\; P_\infty(\text{error}) \;\leq\; 2P_B(1 - P_B),$$
where $P_B$ is the probability of error of a Bayes classifier.
This encouraging result is not so surprising if one considers that, with probability
one, every ball of radius $\epsilon > 0$ centered about a feature vector $x$ contains an
infinite number of reference feature vectors. The annoying factor of
two accounts for the event that the nearest neighbor to $x$ belongs to the class with
smaller posterior probability.
3.1 THE ASYMPTOTIC CONVERGENCE RATE
In order to satisfactorily address the issues of generalization and scalability for the
nearest-neighbor classifier, we need to consider the rate at which the performance of
the classifier approaches its infinite sample limit. The following theorem, applicable
to nearest-neighbor classification in one-dimensional feature spaces, was shown by
Cover (1968).
Theorem 2 Let $p(x \mid \omega_1)$ and $p(x \mid \omega_2)$ have uniformly bounded third derivatives
and let $p(x)$ be bounded away from zero on $S$. Then for sufficiently large $M$,
$$P_M(\text{error}) \;=\; P_\infty(\text{error}) + O\!\left(\frac{1}{M^2}\right).$$
Note that this result is also very encouraging in that an order of magnitude increase
in the sample size decreases the error rate by two orders of magnitude.
The following theorem is our main result which extends Cover's theorem to n-dimensional feature spaces:
²Originally, this theorem was stated for multiclass decision problems; it is here presented
for the two-class problem only for simplicity.
Theorem 3 Let $p(x \mid \omega_1)$, $p(x \mid \omega_2)$, and $p(x)$ satisfy the same conditions as in
Theorem 2. Then, there exists a scalar $a$ (depending on $n$) such that
$$P_M(\text{error}) \;\sim\; P_\infty(\text{error}) + \frac{a}{M^{2/n}},$$
where the right-hand side describes the first two terms of an asymptotic expansion
in reciprocal powers of $M^{2/n}$. Explicitly,
$$a \;=\; \frac{\Gamma\!\left(1 + \frac{2}{n}\right)\left(\Gamma\!\left(\frac{n}{2} + 1\right)\right)^{2/n}}{n\pi}\int_S \left(\sum_{i=1}^{n}\beta_i(x)p(x) + \frac{\gamma_{ii}(x)}{2}\right)\frac{(p(x))^{1-2/n}}{p(x)^2}\,d^n x,$$
where
$$\beta_i(x) \;=\; \frac{\partial p(x)}{\partial x_i}\left(\frac{\partial P(\omega_1 \mid x)}{\partial x_i}P(\omega_2 \mid x) + P(\omega_1 \mid x)\frac{\partial P(\omega_2 \mid x)}{\partial x_i}\right)$$
and
$$\gamma_{ii}(x) \;=\; \frac{\partial^2 P(\omega_1 \mid x)}{\partial x_i^2}P(\omega_2 \mid x) + P(\omega_1 \mid x)\frac{\partial^2 P(\omega_2 \mid x)}{\partial x_i^2}.$$
For $n = 1$ this result agrees with Cover's theorem. With increasing $n$, however,
the convergence rate significantly slows down. Note that the constant $a$ depends on
the way in which the class-conditional densities overlap. If $a$ is bounded away from
zero, then for sufficiently small $\delta > 0$, $P_M(\text{error}) - P_\infty(\text{error}) < \delta$ is satisfied only
if $M > (a/\delta)^{n/2}$, so that the sample size required to achieve a given performance
criterion is exponential in the dimensionality of the feature space. The above provides a sufficient condition for Bellman's well known "curse of dimensionality" in
this context.
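To make the scaling concrete, here is a numerical illustration of ours (not from the paper): if $a/\delta = 10$, the condition $M > (a/\delta)^{n/2}$ requires only $M > 10^{1/2} \approx 3$ reference patterns for $n = 1$, but $M > 10^{5}$ for $n = 10$ and $M > 10^{10}$ for $n = 20$.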
It is also interesting to note that one can easily construct classification problems for
which $a$ vanishes. (Consider, for example, $p(x \mid \omega_1) = p(x \mid \omega_2)$ for all $x$.) In these
cases the higher-order terms in the asymptotic expansion are important.
4 A NUMERICAL EXPERIMENT
A conspicuous weakness in the above theorem is the requirement that $p(x)$ be
bounded away from zero over $S$. In exchange for a uniformly convergent asymptotic
expansion, we have omitted many important probability distributions, including
normal distributions. Therefore we numerically estimate the asymptotic behavior
of $P_M(\text{error})$ for a problem consisting of two normally distributed classes in $\mathbb{R}^n$:
$$p(x \mid \omega_1) \;=\; \frac{1}{(2\pi\sigma^2)^{n/2}}\exp\!\left[-\frac{1}{2\sigma^2}\left((x_1 - \mu)^2 + \sum_{j=2}^{n}x_j^2\right)\right],$$
$$p(x \mid \omega_2) \;=\; \frac{1}{(2\pi\sigma^2)^{n/2}}\exp\!\left[-\frac{1}{2\sigma^2}\left((x_1 + \mu)^2 + \sum_{j=2}^{n}x_j^2\right)\right].$$
Assuming that $P(\omega_1) = P(\omega_2) = 1/2$, we find
$$P_\infty(\text{error}) \;=\; \frac{e^{-\mu^2/2\sigma^2}}{\sigma\sqrt{2\pi}}\int_0^{\infty} e^{-x^2/2\sigma^2}\,\mathrm{sech}\!\left(\frac{\mu x}{\sigma^2}\right)dx.$$
[Figure 2: log-log plot of the estimates of $P_M(\text{error}) - P_\infty(\text{error})$ against $\log_{10}(M)$, with markers for $n = 1, \ldots, 5$ (circles, crosses, triangles, squares, diamonds) and fitted power-law lines.]
Figure 2: Numerical validation of the nearest-neighbor scaling hypothesis for two normally distributed classes in $\mathbb{R}^n$.
For $\mu = \sigma = 1$, $P_\infty(\text{error})$ is numerically found to be 0.22480, which is consistent
with the Bayes probability of error, $P_B = \frac{1}{2}\,\mathrm{erfc}(1/\sqrt{2}) = 0.15865$. (Note that
the expression for $a$ given in Theorem 3 is undefined for these distributions.) For
$n$ ranging from 1 to 5, and $M$ ranging from 1 to 200, three estimates of $P_M(\text{error})$
were obtained, each as the fraction of "failures" in 160,000 or more Bernoulli trials.
Each trial consists of constructing a pseudo-random sample of $M$ reference patterns,
followed by a single attempt to correctly classify a random input pattern. These
estimates of $P_M$ are represented in Figure 2 by circular markers for $n = 1$, crosses
for $n = 2$, etc. The lines in Figure 2 depict the power law
$$P_M(\text{error}) \;=\; P_\infty(\text{error}) + bM^{-2/n},$$
where, for each $n$, $b$ is chosen to obtain an appealing fit. The agreement between
these lines and data points suggests that the asymptotic scaling hypothesis of Theorem 3 can be extended to a wider class of distributions.
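A minimal Monte Carlo sketch of this protocol (our own reconstruction, with far fewer trials than the 160,000 used in the experiment):

```python
import numpy as np

def estimate_PM(M, n, trials=20000, mu=1.0, sigma=1.0, seed=0):
    """Fraction of misclassifications of a random input by the 1-NN rule
    trained on M reference patterns from the two-Gaussian problem."""
    rng = np.random.default_rng(seed)
    errors = 0
    for _ in range(trials):
        labels = rng.integers(0, 2, size=M + 1)   # equal priors
        pts = sigma * rng.normal(size=(M + 1, n))
        pts[:, 0] += mu * (1 - 2 * labels)        # shift x1 by +/- mu per class
        x, y = pts[0], labels[0]                  # the random input pattern
        d2 = np.sum((pts[1:] - x) ** 2, axis=1)
        errors += labels[1 + np.argmin(d2)] != y
    return errors / trials

for M in [1, 10, 100]:
    print(M, estimate_PM(M, n=2))
```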
5 DISCUSSION
The preceding analysis indicates that the convergence rate of the nearest-neighbor
classifier slows down dramatically as the dimensionality of the feature space increases. This rate reduction suggests that proximity in feature space is a less effective measure of class identity in higher dimensional feature spaces. It is also clear
that some degree of smoothness in the class-conditional densities is necessary, as
well as sufficient, for the asymptotic behavior described by our analysis to occur:
in the absence of smoothness conditions, one can construct classification problems
for which the nearest-neighbor convergence rate is arbitrarily slow, even in one dimension (Cover, 1968). Fortunately, the most pressing classification problems are
typically smooth in that they are constrained by regularities implicit in the laws of
nature (Marr, 1982). With additional prior information, the convergence rate may
be enhanced by selecting a smaller number of descriptive features.
Because of their smooth input-output response, neural networks appear to use proximity in feature space as a basis for classification. One might, therefore, expect the
required sample size to scale exponentially with the dimensionality of the feature
space. Recent results from computational learning theory, however, imply that with
a sample size proportional to the capacity (a combinatorial quantity which is characteristic of the network architecture and which typically grows polynomially in the
dimensionality of the feature space), one can in principle identify network parameters (weights) which give (close to) the smallest classification error for the given
architecture (Baum and Haussler, 1989). There are two caveats, however. First,
the information-theoretic sample complexities predicted by learning theory give no
clue as to whether, given a sample of the requisite size, there exist any algorithms
that can specify the appropriate parameters in a reasonable time frame. Second,
and more fundamental, one cannot in general determine whether a particular architecture is intrinsically well suited to a given classification problem. The best
performance achievable may be substantially poorer than that of a Bayes classifier.
Thus, without sufficient prior information, one must search through the space of
all possible network architectures for one that does fit the problem well. This situation now effectively resembles a non-parametric classifier and the analytic results
for the sample complexities of the nearest-neighbor classifier should provide at least
qualitative indications of the corresponding case for neural networks.
References
Baum, E. B. and Haussler, D. (1989), "What size net gives valid generalization,"
Neural Computation, 1, pp. 151-160.
Cover, T. M. (1968), "Rates of convergence of nearest neighbor decision procedures," Proc. First Annual Hawaii Conference on Systems Theory, pp. 413-415.
Cover, T. M. and P. E. Hart (1967), "Nearest neighbor pattern classification," IEEE
Trans. Info. Theory, vol. IT-13, pp. 21-27.
Duda, R. O. and P. E. Hart (1973), Pattern Classification and Scene Analysis. New
York: John Wiley & Sons.
Marr, D. (1982), Vision, San Francisco: W. H. Freeman.
2,559 | 3,320 | Locality and low-dimensions in the prediction of
natural experience from fMRI
Franc?ois G. Meyer
Center for the Study of Brain, Mind and Behavior,
Program in Applied and Computational Mathematics
Princeton University
fmeyer@colorado.edu
Greg J. Stephens
Center for the Study of Brain, Mind and Behavior,
Department of Physics
Princeton University
gstephen@princeton.edu
Both authors contributed equally to this work.
Abstract
Functional Magnetic Resonance Imaging (fMRI) provides dynamical access into
the complex functioning of the human brain, detailing the hemodynamic activity of thousands of voxels during hundreds of sequential time points. One approach towards illuminating the connection between fMRI and cognitive function
is through decoding; how do the time series of voxel activities combine to provide
information about internal and external experience? Here we seek models of fMRI
decoding which are balanced between the simplicity of their interpretation and the
effectiveness of their prediction. We use signals from a subject immersed in virtual reality to compare global and local methods of prediction applying both linear
and nonlinear techniques of dimensionality reduction. We find that the prediction
of complex stimuli is remarkably low-dimensional, saturating with less than 100
features. In particular, we build effective models based on the decorrelated components of cognitive activity in the classically-defined Brodmann areas. For some
of the stimuli, the top predictive areas were surprisingly transparent, including
Wernicke's area for verbal instructions, visual cortex for facial and body features,
and visual-temporal regions for velocity. Direct sensory experience resulted in
the most robust predictions, with the highest correlation (c ≈ 0.8) between the
predicted and experienced time series of verbal instructions. Techniques based on
non-linear dimensionality reduction (Laplacian eigenmaps) performed similarly.
The interpretability and relative simplicity of our approach provides a conceptual
basis upon which to build more sophisticated techniques for fMRI decoding and
offers a window into cognitive function during dynamic, natural experience.
1 Introduction
Functional Magnetic Resonance Imaging (fMRI) is a non-invasive imaging technique that can quantify changes in cerebral venous oxygen concentration. Changes in the fMRI signal that occur during
brain activation are very small (1-5%) and are often contaminated by noise (created by the imaging
system hardware or physiological processes). Statistical techniques that handle the stochastic nature
of the data are commonly used for the detection of activated voxels. Traditional methods of analysis ? which are designed to test the hypothesis that a simple cognitive or sensory stimulus creates
changes in a specific brain area ? are unable to analyze fMRI datasets collected in ?natural stimuli?
where the subjects are bombarded with a multitude of uncontrolled stimuli that cannot always be
quantified [1, 2].
The Experience Based Cognition competition (EBC) [3] offers an opportunity to study complex responses to natural environments, and to test new ideas and new methods for the analysis of fMRI
collected in natural environments. The EBC competition provides fMRI data of three human subjects in three 20-minute segments (704 scanned samples in each segment) in an urban virtual reality
environment along with quantitative time series of natural stimuli or features (25 in total) ranging
from objective features such as the presence of faces to self-reported, subjective cognitive states
such as the experience of fear. During each session, subjects were audibly instructed to complete
three search tasks in the environment: looking for weapons (but not tools), taking pictures of people
with piercings (but not others), or picking up fruits (but not vegetables). The data was collected with
a 3T EPI scanner and typically consists of the activity of 35000 volume elements (voxels) within
the head. The feature time series was provided for only the first two segments (1408 time samples)
and competitive entries are judged on their ability to predict the feature on the third segment (704
time samples, see Fig. 1). At a microscopic level, a large number of internal variables associated
[Figure 1: schematic of a feature time series $f_k(t)$ over the training interval $[t_0, T_l]$ and the test interval $(T_l, T]$, alongside example scenes from the virtual environment and "toy" activation maps at times $t_i$ and $t_j$.]
Figure 1: We study the variation of the set of features $f_k(t)$, $k = 1, \ldots, K$ as a function of the
dynamical changes in the fMRI signal $X(t) = [x_1(t), \ldots, x_N(t)]$. The
features represent both external stimuli such as the presence of faces and internal emotional states
encountered during the exploration of a virtual urban environment (left and right images). We predict
the feature functions $f_k$ for $t = T_{l+1}, \ldots, T$, from the knowledge of the entire fMRI dataset $\mathcal{X}$, and
the partial knowledge of $f_k(t)$ for $t = 1, \ldots, T_l$. The "toy" activation patterns (middle diagram)
illustrate the changes in "brain states" occurring as a function of time.
with various physical and physiological phenomena contribute to the dynamic changes in the fMRI
signal. Because the fMRI signal is a large scale (as compared to the scale of neurons) measurement
of neuronal activity, we expect that many of these variables will be coupled resulting in a low dimensional set for all possible configurations of the activated fMRI signal. In this work we seek a
low dimensional representation of the entire fMRI dataset that provides a new set of "voxel-free"
coordinates to study cognitive and sensory features.
We denote a three-dimensional volume of fMRI composed of a total of $N$ voxels by $X(t) =
[x_1(t), \ldots, x_N(t)]$. We have access to $T$ such volumes. We can stack the spatio-temporal fMRI
dataset into an $N \times T$ matrix,
$$\mathcal{X} = \begin{bmatrix} x_1(1) & \cdots & x_1(T) \\ \vdots & \ddots & \vdots \\ x_N(1) & \cdots & x_N(T) \end{bmatrix}, \qquad (1)$$
where each row $n$ represents a time series $x_n$ generated from voxel $n$ and each column $j$ represents
a scan acquired at time $t_j$. We call the set of features to be predicted $f_k$, $k = 1, \ldots, K$. We are
interested in studying the variation of the set of features $f_k(t)$, $k = 1, \ldots, K$ describing the subject
experience as a function of the dynamical changes of the brain, as measured by $X(t)$. Formally, we
need to build predictions of $f_k(t)$ for $t = T_{l+1}, \ldots, T$, from the knowledge of the entire fMRI dataset
$\mathcal{X}$, and the partial knowledge of $f_k(t)$ for the training time samples $t = 1, \ldots, T_l$ (see Fig. 1).
[Figure 2: schematic of the map $\Phi$ sending fMRI volumes observed at times $t_0, t_i, t_j$ to points on a low-dimensional set $\mathcal{D}$.]
Figure 2: Low-dimensional parametrization of the set of "brain states". The parametrization is
constructed from the samples provided by the fMRI data at different times, and in different states.
2 A voxel-free parametrization of brain states
We use here the global information provided by the dynamical evolution of $X(t)$ over time, both
during the training times and the test times. We would like to effectively replace each fMRI dataset
$X(t)$ by a small set of features that facilitates the identification of the brain states, and makes the
prediction of the features easier. Formally, our goal is to construct a map $\Phi$ from the voxel space to
a low dimensional space,
$$\Phi : \mathbb{R}^N \mapsto \mathcal{D} \subset \mathbb{R}^L \qquad (2)$$
$$X(t) = [x_1(t), \ldots, x_N(t)]^{\top} \mapsto (y_1(t), \ldots, y_L(t)), \qquad (3)$$
where $L \ll N$. As $t$ varies over the training and the test sets, we hope that we explore most of
the possible brain configurations that are useful for predicting the features. The map $\Phi$ provides a
parametrization of the brain states. Figure 2 provides a pictorial rendition of the map $\Phi$. The range
$\mathcal{D}$, represented in Fig. 2 as a smooth surface, is the set of parameters $y_1, \ldots, y_L$ that characterize
the brain dynamics. Different values of the parameters produce different "brain states", associated
with different patterns of activation. Note that time does not play any role on $\mathcal{D}$, and neighboring
points on $\mathcal{D}$ correspond to similar brain states. Equipped with this re-parametrization of the dataset
$\mathcal{X}$, the goal is to learn the evolution of the feature time series as a function of the new coordinates
$[y_1(t), \ldots, y_L(t)]^{\top}$. Each feature function is an implicit function of the brain state measured by
$[y_1(t), \ldots, y_L(t)]$. For a given feature $f_k$, the training data provide us with samples of $f_k$ at certain locations in $\mathcal{D}$. The map $\Phi$ is built by globally computing a new parametrization of the set
$\{X(1), \ldots, X(T)\}$. This parametrization is built in two stages. First, we construct a graph that is
a proxy for the entire set of fMRI data $\{X(1), \ldots, X(T)\}$. Second, we compute some eigenfunctions $\phi_k$ defined on the graph. Each eigenfunction provides one specific coordinate for each node
of the graph.
2.1 The graph of brain states
We represent the fMRI dataset for the training times and test times by a graph. Each vertex i
corresponds to a time sample ti , and we compute the distance between two vertices i and j by
measuring a distance between X(ti ) and X(tj ). Global changes in the signal due to residual head
motion, or global blood flow changes were removed by computing a principal components analysis
(PCA) of the dataset $\mathcal{X}$ and removing a small number of components. We then used the $\ell_2$ distance
between the fMRI volumes (unrolled as $N \times 1$ vectors). This distance compares all the voxels (white
and gray matter, as well as CSF) inside the brain.
2.2 Embedding of the dataset
Once the network of connected brain states is created, we need a distance to distinguish between
strongly connected states (the two fMRI data are in the same cognitive state) and weakly connected
states (the fMRI data are similar, but do not correspond to the same brain states). The Euclidean
distance used to construct the graph is only useful locally: we can use it to compare brain states
that are very similar, but is unfortunately very sensitive to short-circuits created by the noise in the
data. A standard alternative to the geodesic (shortest distance) is provided by the average commute
time, $\kappa(i, j)$, that quantifies the expected path length between $i$ and $j$ for a random walk started at $i$.
Formally, $\kappa(i, j) = H(j, i) + H(i, j)$, where $H(i, j)$ is the hitting time,
$$H(i, j) = E_i[T_j] \quad \text{with} \quad T_j = \min\{n \geq 0;\; Z_n = j\},$$
for a random walk $Z_n$ on the graph with transition probability $P$, defined by $P_{i,j} = w_{i,j}/d_i$, where
$d_i = D_{i,i} = \sum_j w_{i,j}$ is the degree of the vertex $i$. The commute time can be conveniently computed
from the eigenfunctions $\phi_1, \ldots, \phi_N$ of $\mathcal{N} = D^{\frac{1}{2}} P D^{-\frac{1}{2}}$, with the eigenvalues $-1 \leq \lambda_N \leq \cdots \leq \lambda_2 < \lambda_1 = 1$. Indeed, we have
$$\kappa(i, j) = \sum_{k=2}^{N}\frac{1}{1 - \lambda_k}\left(\frac{\phi_k(i)}{\sqrt{d_i}} - \frac{\phi_k(j)}{\sqrt{d_j}}\right)^2.$$
As proposed in [4, 5, 6], we define an embedding
$$i \mapsto I_k(i) = \frac{1}{\sqrt{1 - \lambda_k}}\,\frac{\phi_k(i)}{\sqrt{d_i}}, \qquad k = 2, \ldots, N. \qquad (4)$$
Because $-1 \leq \lambda_N \leq \cdots \leq \lambda_2 < \lambda_1 = 1$, we have $\frac{1}{\sqrt{1-\lambda_2}} > \frac{1}{\sqrt{1-\lambda_3}} > \cdots \geq \frac{1}{\sqrt{1-\lambda_N}}$. We can therefore
neglect $\frac{\phi_k(j)}{\sqrt{1-\lambda_k}}$ for large $k$, and reduce the dimensionality of the embedding by using only the first
$K$ coordinates in (4). The spectral gap measures the difference between the first two eigenvalues,
$\lambda_1 - \lambda_2 = 1 - \lambda_2$. A large spectral gap indicates that the low dimensional embedding will provide a good
approximation. The algorithm for the construction of the embedding is summarized in Fig. 3.
Algorithm 1: Construction of the embedding
Input:
• $X(t)$, $t = 1, \ldots, T$; $K$: number of eigenfunctions.
Algorithm:
1. construct the graph defined by the nn nearest neighbors
2. find the first $K$ eigenfunctions, $\phi_k$, of $\mathcal{N}$
Output: For $t_i = 1 : T$
• new coordinates of $X(t_i)$: $y_k(t_i) = \frac{1}{\sqrt{1-\lambda_k}}\,\frac{\phi_k(i)}{\sqrt{d_i}}$, $k = 2, \ldots, K+1$
Figure 3: Construction of the embedding
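A compact sketch of Algorithm 1 (our own implementation choices: Gaussian edge weights, a symmetrized nearest-neighbor graph, and scipy's dense symmetric eigensolver; the paper does not specify these details, and a connected graph is assumed so that $\lambda_2 < 1$):

```python
import numpy as np
from scipy.linalg import eigh

def embed(X, n_neighbors=10, K=5):
    """Rows of X are scans X(t); returns K embedding coordinates per scan.

    Builds a symmetric k-NN graph, forms N = D^{1/2} P D^{-1/2}
    (equivalently D^{-1/2} W D^{-1/2}), and maps scan i to
    phi_k(i) / (sqrt(1 - lambda_k) * sqrt(d_i)), k = 2, ..., K+1.
    """
    T = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    W = np.exp(-d2 / np.median(d2))              # Gaussian weights
    keep = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    mask = np.zeros_like(W, dtype=bool)
    mask[np.repeat(np.arange(T), n_neighbors), keep.ravel()] = True
    W = np.where(mask | mask.T, W, 0.0)          # sparsify and symmetrize
    d = W.sum(axis=1)                            # vertex degrees d_i
    N_mat = W / np.sqrt(np.outer(d, d))          # D^{-1/2} W D^{-1/2}
    lam, phi = eigh(N_mat)                       # ascending eigenvalues
    lam, phi = lam[::-1], phi[:, ::-1]           # lambda_1 = 1 comes first
    return phi[:, 1:K + 1] / (np.sqrt(1.0 - lam[1:K + 1]) * np.sqrt(d)[:, None])
```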
A parameter of the embedding (Fig. 3) is $K$, the number of coordinates. $K$ can be optimized
by minimizing the prediction error. We expect that for small values of $K$ the embedding will not
describe the data with enough precision, and the prediction will be inaccurate. If $K$ is too large, some
of the new coordinates will be describing the noise in the dataset, and the algorithm will overfit the
training data. Fig. 4(a) illustrates the effect of $K$ on the performance of the nonlinear dimension
reduction. The quality of the prediction for the features faces, instructions and velocity is plotted
against $K$. Instructions elicit a strong response in the auditory cortex that can be decoded with as
few as 20 coordinates. Faces require more (about 50) dimensions to become optimal. As expected,
the performance eventually drops when additional coordinates are used to describe variability that
is not related to the features to be decoded. This confirms our hypothesis that we can replace about
15,000 voxels with 50 appropriately chosen coordinates.
2.3 Semi-supervised learning of the features
The problem of predicting a feature $f_k$ at an unknown time $t_u$ is formulated as a kernel ridge regression problem. The training set $\{f_k(t)$ for $t = 1, \ldots, T_l\}$ is used to estimate the optimal choice of
weights in the following model,
$$\hat{f}(t_u) = \sum_{t=1}^{T_l} \alpha(t)\,K(y(t_u), y(t)),$$
where $K$ is a kernel and $t_u$ is a time point where we need to predict.
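A minimal kernel ridge regression sketch in the embedded coordinates $y(t)$ (the Gaussian kernel and the ridge parameter are our own illustrative choices; the paper does not specify them here):

```python
import numpy as np

def gaussian_kernel(Y1, Y2, sigma=1.0):
    d2 = np.sum((Y1[:, None, :] - Y2[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_predict(Y_train, f_train, Y_test, sigma=1.0, ridge=1e-2):
    """alpha = (K + ridge*I)^{-1} f; then f_hat(t_u) = sum_t alpha(t) K(y(t_u), y(t))."""
    K = gaussian_kernel(Y_train, Y_train, sigma)
    alpha = np.linalg.solve(K + ridge * np.eye(len(K)), f_train)
    return gaussian_kernel(Y_test, Y_train, sigma) @ alpha
```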
2.4 Results
We compared the nonlinear embedding approach (referred to as global Laplacian) to the dimension
reduction obtained with a PCA of $\mathcal{X}$. Here the principal components are principal volumes, and for
each time $t$ we can expand $X(t)$ onto the principal components.
The 1408 training data were divided into two subsets of 704 time samples. We use $f_k(t)$ in one subset
to predict $f_k(t)$ in the other subset. In order to quantify the stability of the prediction we randomly
selected 85% of the training set (first subset), and predicted 85% of the testing set (other subset).
The role, training or testing, of each subset of 704 time samples was also chosen randomly. We
generated 20 experiments for each value of $K$, the number of predictors. The performance was
quantified with the normalized correlation between the model prediction and the real value of $f_k$,
$$r = \frac{\langle \Delta f_k^{est}(t), \Delta f_k(t) \rangle}{\sqrt{\langle (\Delta f_k^{est})^2 \rangle \langle \Delta f_k^2 \rangle}}, \qquad (5)$$
where $\Delta f_k = f_k(t) - \langle f_k \rangle$. Finally, $r$ was averaged over the 20 experiments. Fig. 4(a) and (b) show
the performance of the nonlinear method and linear method as a function of $K$. The approach based
on the nonlinear embedding yields very stable results, with low variance. For both global methods
the optimal performance is reached with less than 50 coordinates. Fig. 5 shows the correlation
coefficients for 11 features, using $K = 33$ coordinates. For most features, the nonlinear embedding
performed better than global PCA.
3 From global to local
While models based on global features leverage predictive components from across the brain, cognitive function is often localized within specific regions. Here we explore whether simple models
based on classical Brodmann regions provide an effective decoder of natural experience. The Brodmann areas were defined almost a century ago (see e.g. [7]) and divide the cortex into approximately
50 regions, based on the structure and arrangement of neurons within each region. While the areas are characterized structurally, many also have distinct functional roles, and we use these roles to
provide useful interpretations of our predictive models. Though the partitioning of cortical regions
remains an open and challenging problem, the Brodmann areas represent a transparent compromise
between dimensionality, function and structure.
Using data supplied by the competition, we warp each brain into standard Talairach coordinates and
locate the Brodmann area corresponding to each voxel. Within each Brodmann region, differing in
size from tens to thousands of elements, we build the covariance matrix of voxel time series using
all three virtual reality episodes. We then project the voxel time series onto the eigenvectors of the
covariance matrix (principal components) and build a simple, linear stimulus decoding model using
the top $n$ modes ranked by their eigenvalues,
$$f_k^{est}(t) = \sum_{i=1}^{n} w_i^k m_i^k(t), \qquad (6)$$
where $k$ indexes the different Brodmann areas, $\{w_i^k\}$ are the linear weights and $\{m_i^k(t)\}$ are the
mode time series in each region. The weights are chosen to minimize the RMS error on the training
set and have a particularly simple form here as the modes are decorrelated, $w_i^k = \langle S(t)\,m_i^k(t) \rangle$.
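A sketch of this per-region decoder (our own minimal reconstruction; the centering and unit-variance normalization conventions, and the name `region_ts`, are assumptions):

```python
import numpy as np

def region_decoder(region_ts, stimulus, n_modes=10):
    """PCA modes of one region's voxel time series, then w_i = <S(t) m_i(t)>.

    region_ts : (n_voxels, T) array, assumed centered in time
    stimulus  : (T,) array S(t), assumed zero-mean
    Returns the model prediction f_est(t) = sum_i w_i m_i(t).
    """
    C = region_ts @ region_ts.T / region_ts.shape[1]   # voxel covariance
    _, vecs = np.linalg.eigh(C)
    modes = vecs[:, -n_modes:].T @ region_ts           # top mode time series
    modes /= modes.std(axis=1, keepdims=True)          # decorrelated, unit variance
    w = modes @ stimulus / len(stimulus)               # w_i = <S(t) m_i(t)>
    return w @ modes
```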
Performance is measured as the normalized correlation r (Eq. 5) between the model prediction and
[Figure 4: four panels. Panels (a)-(c) plot the mean correlation $\langle r \rangle$ (0 to 0.9) for faces, instructions and velocity against the number of modes: (a) global Laplacian and (b) global eigenmodes up to 200, (c) local eigenmodes up to 60. Panel (d) shows the best Brodmann area for decoding faces across 48 splits, dominated by Brodmann areas 37, 21 and 19.]
Figure 4: Performance of the prediction of natural experience for three features, faces, instructions
and velocity as a function of the model dimension. (a) nonlinear embedding, (b) global principal
components, (c) local (Brodmann area) principal components. In all cases we find that the prediction is remarkably low-dimensional, saturating with less than 100 features. (d) stability and interpretability of the optimal Brodmann areas used for decoding the presence of faces. All three areas
are functionally associated with visual processing. Brodmann area 22 (Wernicke's area) is the best
predictor of instructions (not shown). The connections between anatomy, function and prediction
add an important measure of interpretability to our decoding models.
the real stimulus averaged over the two virtual reality episodes and we use the region with the lowest
training error to make the prediction. In principle, we could use a large number of modes to make a
prediction with n limited only by the number of training samples. In practice the predictive power
of our linear model saturates for a remarkably low number of modes in each region. In Fig. 4(c) we
demonstrate the performance of the model as a function of the number of local modes for three stimuli that are
predicted rather well (faces, instructions and velocity).
For many of the well-predicted stimuli, the best Brodmann areas were also stable across subjects and
episodes, offering important interpretability. For example, in the prediction of instructions (which
the subjects received through headphones), the top region was Brodmann Area 22, Wernicke's area,
which has long been associated with the processing of human language. For the prediction of the
face stimulus the best region was usually visual cortex (Brodmann Areas 17 and 19) and for the
prediction of velocity it was Brodmann Area 7, known to be important for the coordination of visual
and motor activity. Using modes derived from Laplacian eigenmaps we were also able to predict an
emotional state, the self-reporting of fear and anxiety. Interestingly, in this case the best predictions
came from higher cognitive areas in frontal cortex, Brodmann Area 11.
While the above discussion highlights the usefulness of classical anatomical location, many aspects
of cognitive experience are not likely to be so simple. Given the reasonable results above, it is natural
[Figure 5: bar plot of $\langle r \rangle$ (0 to 0.9) for eleven features (fruits/vegetables, faces, body, instructions, velocity, arousal, dog, fearful/anxious, hits, interior/exterior, weapons/tools) under the local eigenbrain, global eigenbrain, and global Laplacian decoders.]
Figure 5: Performance of the prediction of natural experience for eleven features, using three different methods. Local decoders do well on stimuli related to objects while nonlinear global methods
better capture stimuli related to emotion.
to look for ways of combining the intuition derived from single classical location with more global
methods that are likely to do better in prediction. As a step in this direction, we modify our model
to include multiple Brodmann areas,
$$f_k^{est}(t) = \sum_{l \in A}\sum_{i=1}^{n} w_i^l m_i^l(t), \qquad (7)$$
where $A$ represents a collection of areas. To make a prediction using the modified model we find the
where A represents a collection of areas. To make a prediction using the modified model we find the
top three Brodmann areas as before (ranked by their training correlation with the stimulus) and then
incorporate all of the modes in these areas ($n_A$ in total) in the linear model of Eq. 7. The weights
$\{w_i^l\}$ are chosen to minimize RMS error on the training data. The combined model leverages both
the interpretive power of single areas and also some of the interactions between them. The results
of this combined predictor are shown in Fig. 5 (black) and are generally significantly better than
the single region predictions. For ease of comparison, we also show the best global results (both
nonlinear Laplacian and global principal components). For many (but not all) of the stimuli, the
local, low-dimensional linear model is significantly better than both linear and nonlinear global
methods.
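A sketch of the combined fit of Eq. 7 (our own reconstruction): stack the $n_A$ mode time series from the selected areas and solve a single least-squares problem for the weights.

```python
import numpy as np

def combined_fit(mode_blocks, stimulus):
    """mode_blocks: list of (n_modes, T) arrays, one per selected Brodmann area.
    Returns the weights minimizing the RMS error of Eq. 7 on the training data."""
    M = np.vstack(mode_blocks)                    # (n_A, T) stacked modes
    w, *_ = np.linalg.lstsq(M.T, stimulus, rcond=None)
    return w, w @ M                               # weights and fitted time series
```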
4 Discussion
Incorporating the knowledge of functional, cortical regions, we used fMRI to build low-dimensional
models of natural experience that performed surprisingly well at predicting many of the complex
stimuli in the EBC competition. In addition, the regional basis of our models allows for transparent
cognitive interpretation, such as the emergence of Wernicke's area for the prediction of auditory
instructions in the virtual environment. Other well-predicted experiences include the presence of
body parts and faces, both of which were decoded by areas in visual cortex. In future work, it will
be interesting to examine whether there is a well-defined cognitive difference between stimuli that
can be decoded with local brain function and those that appear to require more global techniques.
We also learned in this work that nonlinear methods for embedding datasets, inspired by manifold
learning methods [4, 5, 6], outperform linear techniques in their ability to capture the complex
dynamics of fMRI. Finally, our particular use of Brodmann areas and linear methods represent only
a first step towards combining prior knowledge of broad regional brain function with the construction
of models for the decoding of natural experience. Despite the relative simplicity, an entry based on
this approach scored within the top 5 of the EBC2007 competition [3].
Acknowledgments
GJS was supported in part by National Institutes of Health Grant T32 MH065214 and by the Swartz
Foundation. FGM was partially supported by the Center for the Study of Brain, Mind and Behavior,
Princeton University. The authors are very grateful to all the members of the center for their support
and insightful discussions.
References
[1] Y. Golland, S. Bentin, H. Gelbard, Y. Benjamini, R. Heller, and Y. Nir et al. Extrinsic and
intrinsic systems in the posterior cortex of the human brain revealed during natural sensory
stimulation. Cerebral Cortex, 17:766–777, 2007.
[2] S. Malinen, Y. Hlushchuk, and R. Hari. Towards natural stimulation in fMRI: issues of data
analysis. NeuroImage, 35:131–139, 2007.
[3] http://www.ebc.pitt.edu.
[4] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373–1396, 2003.
[5] P. Bérard, G. Besson, and S. Gallot. Embedding Riemannian manifolds by their heat kernel.
Geometric and Functional Analysis, 4(4):373–398, 1994.
[6] R.R. Coifman and S. Lafon. Diffusion maps. Applied and Computational Harmonic Analysis,
21:5–30, 2006.
[7] E.R. Kandel, J.H. Schwartz, and T.M. Jessell. Principles of Neural Science. McGraw-Hill, New
York, 2000.
2,560 | 3,321 | FilterBoost: Regression and Classification on Large
Datasets
Robert E. Schapire
Department of Computer Science
Princeton University
Princeton, NJ 08540
schapire@cs.princeton.edu
Joseph K. Bradley
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
jkbradle@cs.cmu.edu
Abstract
We study boosting in the filtering setting, where the booster draws examples from
an oracle instead of using a fixed training set and so may train efficiently on very
large datasets. Our algorithm, which is based on a logistic regression technique
proposed by Collins, Schapire, & Singer, requires fewer assumptions to achieve
bounds equivalent to or better than previous work. Moreover, we give the first
proof that the algorithm of Collins et al. is a strong PAC learner, albeit within the
filtering setting. Our proofs demonstrate the algorithm's strong theoretical properties for both classification and conditional probability estimation, and we validate
these results through extensive experiments. Empirically, our algorithm proves
more robust to noise and overfitting than batch boosters in conditional probability
estimation and proves competitive in classification.
1 Introduction
Boosting provides a ready method for improving existing learning algorithms for classification.
Taking a weaker learner as input, boosters use the weak learner to generate weak hypotheses which
are combined into a classification rule more accurate than the weak hypotheses themselves. Boosters
such as AdaBoost [1] have shown considerable success in practice.
Most boosters are designed for the batch setting where the learner trains on a fixed example set.
This setting is reasonable for many applications, yet it requires collecting all examples before training. Moreover, most batch boosters maintain distributions over the entire training set, making them
computationally costly for very large datasets. To make boosting feasible on larger datasets, learners
can be designed for the filtering setting. The batch setting provides the learner with a fixed training
set, but the filtering setting provides an oracle which can produce an unlimited number of labeled
examples, one at a time. This idealized model may describe learning problems with on-line example
sources, including very large datasets which must be loaded piecemeal into memory. By using new
training examples each round, filtering boosters avoid maintaining a distribution over a training set
and so may use large datasets much more efficiently than batch boosters.
The first polynomial-time booster, by Schapire, was designed for filtering [2]. Later filtering boosters
included two more efficient ones proposed by Freund, but both are non-adaptive, requiring a priori
bounds on weak hypothesis error rates and combining weak hypotheses via unweighted majority
votes [3,4]. Domingo & Watanabe's MadaBoost is competitive with AdaBoost empirically but theoretically requires weak hypotheses' error rates to be monotonically increasing, an assumption we
found to be violated often in practice [5]. Bshouty & Gavinsky proposed another, but, like Freund's,
their algorithm requires an a priori bound on weak hypothesis error rates [6]. Gavinsky's AdaFlat_filt
algorithm and Hatano's GiniBoost do not have these limitations, but the former has worse bounds
than other adaptive algorithms while the latter explicitly requires finite weak hypothesis spaces [7,8].
This paper presents FilterBoost, an adaptive boosting-by-filtering algorithm. We show it is applicable to both conditional probability estimation, where the learner predicts the probability of each
label given an example, and classification. In Section 2, we describe the algorithm, after which
we interpret it as a stepwise method for fitting an additive logistic regression model for conditional
probabilities. We then bound the number of rounds and examples required to achieve any target
error in (0, 1). These bounds match or improve upon those for previous filtering boosters but require
fewer assumptions. We also show that FilterBoost can use the confidence-rated predictions from
weak hypotheses described by Schapire & Singer [9].
In Section 3, we give results from extensive experiments. For conditional probability estimation, we
show that FilterBoost often outperforms batch boosters, which prove less robust to overfitting. For
classification, we show that filtering boosters' efficiency on large datasets allows them to achieve
higher accuracies faster than batch boosters in many cases.
FilterBoost is based on a modification of AdaBoost by Collins, Schapire & Singer designed to minimize logistic loss [10]. Their batch algorithm has yet to be shown to achieve arbitrarily low test
error, but we use techniques similar to those of MadaBoost to adapt the algorithm to the filtering setting and prove generalization bounds. The result is an adaptive algorithm with realistic assumptions
and strong theoretical properties. Its robustness and efficiency on large datasets make it competitive
with existing methods for both conditional probability estimation and classification.
2 The FilterBoost Algorithm
Let X be the set of examples and Y a discrete set of labels. For simplicity, assume X is countable, and consider only binary labels Y = {−1, +1}. We assume there exists an unknown target distribution D over labeled examples (x, y) ∈ X × Y from which training and test examples are generated. The goal in classification is to choose a hypothesis h : X → Y which minimizes the classification error Pr_D[h(x) ≠ y], where the subscript indicates that the probability is with respect to (x, y) sampled randomly from D.

In the batch setting, a booster is given a fixed training set S and a weak learner which, given any distribution D_t over training examples S, is guaranteed to return a weak hypothesis h_t : X → R such that the error ε_t ≡ Pr_{D_t}[sign(h_t(x)) ≠ y] < 1/2. For T rounds t, the booster builds a distribution D_t over S, runs the weak learner on S and D_t, and receives h_t. The booster usually then estimates ε_t using S and weights h_t with α_t = α_t(ε_t). After T rounds, the booster outputs a final hypothesis H which is a linear combination of the weak hypotheses (e.g. H(x) = Σ_t α_t h_t(x)). The sign of H(x) indicates the predicted label ŷ for x.

Two key elements of boosting are constructing D_t over S and weighting weak hypotheses. D_t is built such that misclassified examples receive higher weights than in D_{t−1}, eventually forcing the weak learner to classify previously poorly classified examples correctly. Weak hypotheses h_t are generally weighted such that hypotheses with lower errors receive higher weights.
2.1 Boosting-by-Filtering
We describe a general framework for boosting-by-filtering which includes most existing algorithms as well as our algorithm FilterBoost. The filtering setting assumes the learner has access to an example oracle, allowing it to use entirely new examples sampled i.i.d. from D on each round. However, while maintaining the distribution D_t is straightforward in the batch setting, there is no fixed set S on which to define D_t in filtering. Instead, the booster simulates examples drawn from D_t by drawing examples from D via the oracle and reweighting them according to D_t. Filtering boosters generally accept each example (x, y) from the oracle for training on round t with probability proportional to the example's weight D_t(x, y). The mechanism which accepts examples from the oracle with some probability is called the filter.

Thus, on each round, a boosting-by-filtering algorithm draws a set of examples from D_t via the filter, trains the weak learner on this set, and receives a weak hypothesis h_t. Though a batch booster would estimate ε_t using the fixed set S, filtering boosters may use new examples from the filter. Like batch boosters, filtering boosters may weight h_t using α_t = α_t(ε_t), and they output a linear combination of h_1, . . . , h_T as a final hypothesis.
The filtering setting allows the learner to estimate the error of H_t to arbitrary precision by sampling from D via the oracle, so FilterBoost does this to decide when to stop boosting.

2.2 FilterBoost

FilterBoost, given in Figure 1, is modeled after the aforementioned algorithm by Collins et al. [10] and MadaBoost [5]. Given an example oracle, weak learner, target error ε ∈ (0, 1), and confidence parameter δ ∈ (0, 1) upper-bounding the probability of failure, it iterates until the current combined hypothesis H_t has error ≤ ε. On round t, FilterBoost draws m_t examples from the filter to train the weak learner and get h_t. The number m_t must be large enough to ensure h_t has error ε_t < 1/2 with high probability. The edge of h_t is γ_t = 1/2 − ε_t, and this edge is estimated by the function getEdge(), discussed below, and is used to set h_t's weight α_t. The current combined hypothesis is defined as H_t = sign(Σ_{t'=1}^{t} α_{t'} h_{t'}).

The Filter() function generates (x, y) from D_t by repeatedly drawing (x, y) from the oracle, calculating the weight q_t(x, y) ∝ D_t(x, y), and accepting (x, y) with probability q_t(x, y).

Function getEdge() uses a modification of the Nonmonotonic Adaptive Sampling method of Watanabe [11] and Domingo, Gavaldà & Watanabe [12]. Their algorithm draws an adaptively chosen number of examples from the filter and returns an estimate γ̂_t of the edge of h_t within relative error τ of the true edge γ_t with high probability. The getEdge() function revises this estimate as γ̂′_t = γ̂_t / (1 + τ).

Define F_t(x) ≡ Σ_{t'=1}^{t−1} α_{t'} h_{t'}(x)

Algorithm FilterBoost accepts Oracle(), ε, δ, τ:
  For t = 1, 2, 3, . . .
    δ_t ← δ / (3t(t+1))
    Call Filter(t, δ_t, ε) to get m_t examples to train WL; get h_t
    γ̂′_t ← getEdge(t, τ, δ_t, ε)
    α_t ← (1/2) ln( (1/2 + γ̂′_t) / (1/2 − γ̂′_t) )
    Define H_t(x) = sign(F_{t+1}(x))
  (Algorithm exits from Filter() function.)

Function Filter(t, δ_t, ε) returns (x, y)
  Define r = # calls to Filter so far on round t
  δ′_t ← δ_t / (r(r+1))
  For (i = 0; i < (2/ε) ln(1/δ′_t); i = i + 1):
    (x, y) ← Oracle()
    q_t(x, y) ← 1 / (1 + e^{y F_t(x)})
    Return (x, y) with probability q_t(x, y)
  End algorithm; return H_{t−1}

Function getEdge(t, τ, δ_t, ε) returns γ̂′_t
  Let m ← 0, n ← 0, u ← 0, α ← ∞
  While (|u| < α(1 + 1/τ)):
    (x, y) ← Filter(t, δ_t, ε)
    n ← n + 1
    m ← m + I(h_t(x) = y)
    u ← m/n − 1/2
    α ← √( (1/2n) ln(n(n+1)/δ_t) )
  Return u / (1 + τ)

Figure 1: The algorithm FilterBoost.
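To make the control flow of Figure 1 concrete, the following is a minimal Python sketch of FilterBoost. The helper names, the fixed per-round sample size m_t, the round cap, and the overflow guard are our own illustrative choices, not part of the paper; it assumes the weak learner returns a function h(x) with labels in {−1, +1}.

```python
import math
import random

def filterboost(oracle, weak_learner, eps, delta, tau, m_t=300, max_rounds=1000):
    """Illustrative sketch of FilterBoost (Figure 1), not the authors' code."""
    ensemble = []                          # (alpha, h) pairs defining F_t

    def F(x):
        return sum(a * h(x) for a, h in ensemble)

    def q(x, y):                           # q_t(x, y) = 1 / (1 + e^{y F_t(x)})
        z = max(min(y * F(x), 30.0), -30.0)    # guard against overflow
        return 1.0 / (1.0 + math.exp(z))

    def draw_filtered(delta_prime):
        # Rejection sampling; by Theorem 4, if (2/eps) ln(1/delta') draws in
        # a row are all rejected, err_t <= eps w.h.p., so boosting may stop.
        for _ in range(int(math.ceil((2.0 / eps) * math.log(1.0 / delta_prime)))):
            x, y = oracle()
            if random.random() < q(x, y):
                return (x, y)
        return None

    for t in range(1, max_rounds + 1):
        delta_t = delta / (3.0 * t * (t + 1))
        calls = 0

        def next_example():
            nonlocal calls
            calls += 1
            return draw_filtered(delta_t / (calls * (calls + 1)))

        sample = []
        while len(sample) < m_t:
            ex = next_example()
            if ex is None:
                return ensemble            # exit from inside Filter()
            sample.append(ex)
        h = weak_learner(sample)

        # getEdge(): nonmonotonic adaptive sampling for the edge of h;
        # the returned gamma is assumed to lie strictly inside (-1/2, 1/2)
        m = n = 0
        u, a = 0.0, float('inf')
        while abs(u) < a * (1.0 + 1.0 / tau):
            ex = next_example()
            if ex is None:
                return ensemble
            x, y = ex
            n += 1
            m += 1 if h(x) == y else 0
            u = m / n - 0.5
            a = math.sqrt((1.0 / (2.0 * n)) * math.log(n * (n + 1) / delta_t))
        gamma = u / (1.0 + tau)
        ensemble.append((0.5 * math.log((0.5 + gamma) / (0.5 - gamma)), h))
    return ensemble                        # classify with sign(F(x))
```

In the filtering setting the oracle is the only data access, so the per-round cost is dominated by the rejection sampling in draw_filtered rather than by a pass over a stored training set.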
2.3 Analysis: Conditional Probability Estimation
We begin our analysis of FilterBoost by interpreting it as an additive model for logistic regression, for this interpretation will later aid in the analysis for classification. Such models take the form

log( Pr[y = 1|x] / Pr[y = −1|x] ) = Σ_t f_t(x) = F(x),   which implies   Pr[y = 1|x] = 1 / (1 + e^{−F(x)}),
where, for FilterBoost, f_t(x) = α_t h_t(x). Dropping subscripts, we can write the expected negative log likelihood of example (x, y) after round t as

ℓ(F_t + α_t h_t) = ℓ(F + αh) = E[ −ln( 1 / (1 + e^{−y(F(x)+αh(x))}) ) ] = E[ ln( 1 + e^{−y(F(x)+αh(x))} ) ].
Taking a similar approach to the analysis of AdaBoost in [13], we show in the following theorem
that FilterBoost performs an approximate stepwise minimization of this negative log likelihood. The
proof is in the Appendix.
Theorem 1 Define the expected negative log likelihood ℓ(F + αh) as above. Given F, FilterBoost chooses h to minimize a second-order Taylor expansion of ℓ around h = 0. Given this h, it then chooses α to minimize an upper bound of ℓ.

The batch booster given by Collins et al. [10] which FilterBoost is based upon is guaranteed to converge to the minimum of this objective when working over a finite sample. Note that FilterBoost uses weak learners which are simple classifiers to perform regression. AdaBoost too may be interpreted as an additive logistic regression model of the form Pr[y = 1|x] = 1 / (1 + e^{−2F(x)}), with E[exp(−yF(x))] as the optimization objective [13].
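As a concrete illustration of the two link functions, here is a small helper (the name and interface are our own) that maps an additive score F(x) to a conditional probability; the 1/(1+e^{−F}) link corresponds to FilterBoost, while the 1/(1+e^{−2F}) link is the one arising in Friedman et al.'s view of AdaBoost [13].

```python
import math

def prob_positive(score, link="filterboost"):
    """Map an additive score F(x) to an estimate of Pr[y = 1 | x].
    'filterboost' uses 1/(1 + e^{-F}); 'adaboost' uses 1/(1 + e^{-2F})."""
    scale = 1.0 if link == "filterboost" else 2.0
    z = max(min(scale * score, 30.0), -30.0)   # numerical guard
    return 1.0 / (1.0 + math.exp(-z))
```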
2.4 Analysis: Classification
In this section, we interpret FilterBoost as a traditional boosting algorithm for classification and prove bounds on its generalization error. We first give a theorem relating err_t, the error rate of the current combined hypothesis over the target distribution D, to p_t, the probability with which the filter accepts a random example generated by the oracle on round t.

Theorem 2 Let err_t = Pr_D[H_{t−1}(x) ≠ y] (recall H_{t−1}(x) = sign(F_t(x))), and let p_t = E_D[q_t(x, y)]. Then err_t ≤ 2p_t.

Proof:
err_t = Pr_D[H_{t−1}(x) ≠ y] = Pr_D[yF_t(x) ≤ 0]
     = Pr_D[q_t(x, y) ≥ 1/2] ≤ 2 · E_D[q_t(x, y)] = 2p_t   (using Markov's inequality above)
We next use the expected negative log likelihood ℓ from Section 2.3 as an auxiliary function to aid in bounding the required number of boosting rounds. Viewing ℓ as a function of the boosting round t, we can write ℓ_t = −Σ_{(x,y)} D(x, y) ln(1 − q_t(x, y)). Our goal is then to minimize ℓ_t, and the following lemma captures the learner's progress in terms of the decrease in ℓ_t on each round. This lemma assumes edge estimates returned by getEdge() are exact, i.e. γ̂′_t = γ_t, which leads to a simpler bound on T in Theorem 3. We then consider the error in edge estimates and give a revised bound in Lemma 2 and Theorem 5. The proofs of Lemmas 1 and 2 are in the Appendix.

Lemma 1 Assume for all t that γ_t ≠ 0 and γ_t is estimated exactly. Let ℓ_t = −Σ_{(x,y)} D(x, y) ln(1 − q_t(x, y)). Then

ℓ_t − ℓ_{t+1} ≥ p_t ( 1 − 2√(1/4 − γ_t²) ).

Combining Theorem 2, which bounds the error of the current combined hypothesis in terms of p_t, with Lemma 1 gives the following upper bound on the required rounds T.
Theorem 3 Let γ = min_t |γ_t|, and let ε be the target error. Given Lemma 1's assumptions, if FilterBoost runs

T > 2 ln(2) / ( ε (1 − 2√(1/4 − γ²)) )

rounds, then err_t < ε for some t, 1 ≤ t ≤ T. In particular, this is true for T > ln(2)/(εγ²).

Proof: For all (x, y), since F_1(x, y) = 0, then q_1(x, y) = 1/2 and ℓ_1 = ln(2). Now, suppose err_t ≥ ε, ∀t ∈ {1, ..., T}. Then, from Theorem 2, p_t ≥ ε/2, so Lemma 1 gives

ℓ_t − ℓ_{t+1} ≥ (ε/2) ( 1 − 2√(1/4 − γ²) ).

Unraveling this recursion as Σ_{t=1}^{T} (ℓ_t − ℓ_{t+1}) = ℓ_1 − ℓ_{T+1} ≤ ℓ_1 gives

T ≤ 2 ln(2) / ( ε (1 − 2√(1/4 − γ²)) ).

So, err_t ≥ ε, ∀t ∈ {1, ..., T} is contradicted if T exceeds the theorem's lower bound. The simplified bound follows from the first bound via the inequality 1 − √(1 − x) ≥ x/2 for x ∈ [0, 1].
Theorem 3 shows FilterBoost can reduce generalization error to any ε ∈ (0, 1), but we have thus far overlooked the probabilities of failure introduced by three steps: training the weak learner, deciding when to stop boosting, and estimating edges. We bound the probability of each of these steps failing on round t with a confidence parameter δ_t = δ/(3t(t+1)) so that a simple union bound ensures the probability of some step failing to be at most FilterBoost's confidence parameter δ. Finally, we revise Lemma 1 and Theorem 3 to account for error in estimating edges.
The number m_t of examples the weak learner trains on must be large enough to ensure weak hypothesis h_t has a non-zero edge and should be set according to the choice of weak learner.

To decide when to stop boosting (i.e. when err_t ≤ ε), we can use Theorem 2, which upper-bounds the error of the current combined hypothesis in terms of the probability p_t that Filter() accepts a random example from the oracle. If the filter rejects enough examples in a single call, we know p_t is small, so the combined hypothesis is accurate enough. Theorem 4 formalizes this intuition; the proof is in the Appendix.

Theorem 4 In a single call to Filter(t), if n examples have been rejected, where n ≥ (2/ε) ln(1/δ′_t), then err_t ≤ ε with probability at least 1 − δ′_t.

Theorem 4 provides a stopping condition which is checked on each call to Filter(). Each check may fail with probability at most δ′_t = δ_t/(r(r+1)) on the rth call to Filter() so that a union bound ensures FilterBoost stops prematurely on round t with probability at most δ_t. Theorem 4 uses a similar argument to that used for MadaBoost, giving similar stopping criteria for both algorithms.
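The stopping check itself is tiny; a sketch (the helper name is ours):

```python
import math

def filter_certifies_stop(n_rejected, eps, delta_prime):
    """Theorem 4's condition: if a single call to Filter() sees at least
    (2/eps) * ln(1/delta') consecutive rejections, then err_t <= eps
    with probability at least 1 - delta'."""
    return n_rejected >= (2.0 / eps) * math.log(1.0 / delta_prime)
```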
We estimate weak hypotheses' edges γ_t using the Nonmonotonic Adaptive Sampling (NAS) algorithm [11,12] used by MadaBoost. To compute an estimate γ̂_t of the true edge γ_t within relative error τ ∈ (0, 1) with probability ≥ 1 − δ_t, the NAS algorithm uses at most ( 2(1 + 2τ)² / (τγ_t)² ) ln( 1/(δ_t τ γ_t) ) filtered examples. With this guarantee on edge estimates, we can rewrite Lemma 1 as follows:

Lemma 2 Assume for all t that γ_t ≠ 0 and γ_t is estimated to within τ ∈ (0, 1) relative error. Let ℓ_t = −Σ_{(x,y)} D(x, y) ln(1 − q_t(x, y)). Then

ℓ_t − ℓ_{t+1} ≥ p_t ( 1 − 2√( 1/4 − γ_t² ((1 − τ)/(1 + τ))² ) ).
Using Lemma 2, the following theorem modifies Theorem 3 to account for error in edge estimates.

Theorem 5 Let γ = min_t |γ_t|. Let ε be the target error. Given Lemma 2's assumptions, if FilterBoost runs

T > 2 ln(2) / ( ε (1 − 2√( 1/4 − γ² ((1 − τ)/(1 + τ))² )) )

rounds, then err_t < ε for some t, 1 ≤ t ≤ T.
The bounds from Theorems 3 and 5 show FilterBoost requires at most O(1/(εγ²)) boosting rounds. MadaBoost [5], which we test in our experiments, resembles FilterBoost but uses truncated exponential weights q_t(x, y) = min{1, exp(−yF_{t−1}(x))} instead of the logistic weights q_t(x, y) = (1 + exp(yF_t(x)))^{−1} used by FilterBoost. The algorithms' analyses differ, with MadaBoost requiring the edges γ_t to be monotonically decreasing, but both lead to similar bounds on the number of rounds T proportional to 1/ε. The non-adaptive filtering boosters of Freund [3,4] and of Bshouty & Gavinsky [6] and the batch booster AdaBoost [1] have smaller bounds on T, proportional to log(1/ε). However, we can use boosting tandems, a technique used by Freund [4] and Gavinsky [7], to create a filtering booster with T bounded by O(log(1/ε)/γ²). Following Gavinsky, we can use FilterBoost to boost the accuracy of the weak learner to some constant and, in turn, treat FilterBoost as a weak learner and use an algorithm from Freund to achieve any target error. As with AdaFlat_filt, boosting tandems turn FilterBoost into an adaptive booster with a bound on T proportional to log(1/ε). (Without boosting tandems, AdaFlat_filt requires T ∝ 1/ε² rounds.) Note, however, that boosting tandems result in more complicated final hypotheses.
An alternate bound for FilterBoost may be derived using techniques from Shalev-Shwartz & Singer [14]. They use the framework of convex repeated games to define a general method for bounding the performance of online and boosting algorithms. For FilterBoost, their techniques, combined with Theorem 2, give a bound similar to that in Theorem 3 but proportional to 1/ε² instead of 1/ε.

Schapire & Singer [9] show AdaBoost benefits from confidence-rated predictions, where weak hypotheses return predictions whose absolute values indicate confidence. These values are chosen to greedily minimize AdaBoost's exponential loss function over training data, and this aggressive weighting can result in faster learning. FilterBoost may use confidence-rated predictions in an identical manner. In the proof of Lemma 1, the decrease in the negative log likelihood ℓ_t of the data (relative to H_t and the target distribution D) is lower-bounded by p_t − p_t Σ_{(x,y)} D_t(x, y) e^{−α_t y h_t(x)}. Since p_t is fixed, maximizing this bound is equivalent to minimizing the exponential loss over D_t.
3 Experiments
Vanilla FilterBoost accepts examples (x, y) from the oracle with probability q_t(x, y), but it may instead accept all examples and weight each with q_t(x, y). Weighting instead of filtering examples increases accuracy but also increases the size of the training set passed to the weak learner. For efficiency, we choose to filter when training the weak learner but weight when estimating edges γ_t. We also modify FilterBoost's getEdge() function for efficiency. The Nonmonotonic Adaptive Sampling (NAS) algorithm used to estimate edges γ_t uses many examples, but using several orders of magnitude fewer sacrifices little accuracy. The same is true for MadaBoost. In all tests, we use C_n log(t + 1) examples to estimate γ_t, where C_n = 300 and the log factor scales the number as the NAS algorithm would. For simplicity, we train weak learners with C_n log(t + 1) examples as well. These modifications mean τ (error in edge estimates) and δ (confidence) have no effect on our tests. To simulate an oracle, we randomly permute the data and use examples in the new order. In practice, filtering boosters can achieve higher accuracy by cycling through training sets again instead of stopping once examples are depleted, and we use this 'recycling' in our tests.
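The oracle simulation described above admits a very small implementation; the class below is our sketch of it (names are ours).

```python
import random

class RecyclingOracle:
    """Serve a fixed dataset in a random order; reshuffle ('recycle')
    when the examples are depleted instead of stopping."""
    def __init__(self, examples, seed=0):
        self.examples = list(examples)
        self.rng = random.Random(seed)
        self.rng.shuffle(self.examples)
        self.pos = 0

    def __call__(self):
        if self.pos == len(self.examples):
            self.rng.shuffle(self.examples)   # recycle
            self.pos = 0
        ex = self.examples[self.pos]
        self.pos += 1
        return ex
```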
We test FilterBoost with and without confidence-rated predictions (labeled '(C-R)' in our results). We compare FilterBoost against MadaBoost [5], which does not require an a priori bound on weak hypotheses' edges and has similar bounds without the complication of boosting tandems. We implement MadaBoost with the same modifications as FilterBoost. We test FilterBoost against two batch boosters: the well-studied and historically successful AdaBoost [1] and the algorithm from Collins et al. [10] which is essentially a batch version of FilterBoost (labeled 'AdaBoost-LOG'). We test both with and without confidence-rated predictions as well as with and without resampling (labeled '(resamp)'). In resampling, the booster trains weak learners on small sets of examples sampled from the distribution D_t over the training set S rather than on the entire set S, and this technique often increases efficiency with little effect on accuracy. Our batch boosters use sets of size C_m log(t + 1) for training, like the filtering boosters, but use all of S to estimate edges γ_t since this can be done efficiently. We test the batch boosters using confidence-rated predictions and resampling in order to compare FilterBoost with batch algorithms optimized for the efficiency which boosting-by-filtering claims as its goal.
We test each booster using decision stumps and decision trees as weak learners to discern the effects
of simple and complicated weak hypotheses. The decision stumps minimize training error, and the
decision trees greedily maximize information gain and are pruned using 1/3 of the data. Both weak
learners minimize exponential loss when outputting confidence-rated predictions.
We use four datasets, described in the Appendix. Briefly, we use two synthetic sets: Majority
(majority vote) and Twonorm [15], and two real sets from the UCI Machine Learning Repository
[16]: Adult (census data; from Ron Kohavi) and Covertype (forestry data with 7 classes merged to
2; Copyr. Jock A. Blackard & Colorado State U.). We average over 10 runs, using new examples
for synthetic data (with 50,000 test examples except where stated) and cross validation for real data.
Figure 2 compares the boosters' runtimes. As expected, filtering boosters run slower per round than
batch boosters on small datasets but much faster on large ones. Interestingly, filtering boosters take
longer on very small datasets in some cases (not shown), for the probability the filter accepts an
example quickly shrinks when the booster has seen that example many times.
Figure 2: Running times: Ada/Filter/MadaBoost. Majority; WL = stumps.
3.1 Results: Conditional Probability Estimation
In Section 2.3, we discussed the interpretation of FilterBoost and AdaBoost as stepwise algorithms for conditional probability estimation. We test both algorithms and the variants discussed above on all four datasets. We do not test MadaBoost, as it is not clear how to use it to estimate conditional probabilities. As Figure 3 shows, both FilterBoost variants are competitive with batch algorithms when boosting decision stumps. With decision trees, all algorithms except for FilterBoost overfit badly, including FilterBoost(C-R). In each plot, we compare FilterBoost with the best of AdaBoost and AdaBoost-LOG: AdaBoost was best with decision stumps and AdaBoost-LOG with decision trees. For comparison, batch logistic regression via gradient descent achieves RMSE 0.3489 and log (base e) loss 0.4259; FilterBoost, interpretable as a stepwise method for logistic regression, seems to be approaching these asymptotically.

Figure 3: Log (base e) loss & root mean squared error (RMSE). Majority; 10,000 train exs. Left two: WL = stumps (FilterBoost vs. AdaBoost); right two: WL = trees (FilterBoost vs. AdaBoost-LOG).
On Adult and Twonorm, FilterBoost generally outperforms the batch boosters, which tend to overfit
when boosting decision trees, though AdaBoost slightly outperforms FilterBoost on smaller datasets
when boosting decision stumps.
The Covertype dataset is an exception to our results and highlights a danger in filtering and in
resampling for batch learning: the complicated structure of some datasets seems to require decision
trees to train on the entire dataset. With decision stumps, the filtering boosters are competitive,
yet only the non-resampling batch boosters achieve high accuracies with decision trees. The first
decision tree trained on the entire training set achieves about 94% accuracy, which is unachievable
by any of the filtering or resampling batch boosters when using C_m = 300 as the base number of examples for training the weak learner. To compete with non-resampling batch boosters, the other boosters must use C_m on the order of 10^5, by which point they become very inefficient.
3.2 Results: Classification
Vanilla FilterBoost and MadaBoost perform similarly in classification (Figure 4). Confidence-rated
predictions allow FilterBoost to outperform MadaBoost when using decision stumps but sometimes
cause FilterBoost to perform poorly with decision trees. Figure 5 compares FilterBoost with the
best batch booster for each weak learner. With decision stumps, all boosters achieve higher accuracies with the larger dataset, on which filtering algorithms are much more efficient. Majority is
represented well as a linear combination of decision stumps, so the boosters all learn more slowly
Figure 4: FilterBoost vs. MadaBoost.
Figure 5: FilterBoost vs. AdaBoost & AdaBoost-LOG. Majority.
when using the overly complicated decision trees. However, this problem generally affects filtering
boosters less than most batch variants, especially on larger datasets. Adult and Twonorm gave similar results. As in Section 3.1, filtering and resampling batch boosters perform poorly on Covertype.
Thus, while FilterBoost is competitive in classification, its best performance is in regression.
References
[1] Freund, Y., & Schapire, R. E. (1997) A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55, 119-139.
[2] Schapire, R. E. (1990) The strength of weak learnability. Machine Learning, 5(2), pp. 197-227.
[3] Freund, Y. (1995) Boosting a weak learning algorithm by majority. Information and Computation, 121, pp.
256-285.
[4] Freund, Y. (1992) An improved boosting algorithm and its implications on learning complexity. 5th Annual
Conference on Computational Learning Theory, pp. 391-398.
[5] Domingo, C., & Watanabe, O. (2000) MadaBoost: a modification of AdaBoost. 13th Annual Conference
on Computational Learning Theory, pp. 180-189.
[6] Bshouty, N. H., & Gavinsky, D. (2002) On boosting with polynomially bounded distributions. Journal of
Machine Learning Research, 3, pp. 483-506.
[7] Gavinsky, D. (2003) Optimally-smooth adaptive boosting and application to agnostic learning. Journal of
Machine Learning Research, 4, pp. 101-117.
[8] Hatano, K. (2006) Smooth boosting using an information-based criterion. 17th International Conference
on Algorithmic Learning Theory, pp. 304-319.
[9] Schapire, R. E., & Singer, Y. (1999) Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37, 297-336.
[10] Collins, M., Schapire, R. E., & Singer, Y. (2002) Logistic regression, AdaBoost and Bregman distances.
Machine Learning, 48, pp. 253-285.
[11] Watanabe, O. (2000) Simple sampling techniques for discovery science. IEICE Trans. Information and
Systems, E83-D(1), 19-26.
[12] Domingo, C., Gavaldà, R., & Watanabe, O. (2002) Adaptive sampling methods for scaling up knowledge
discovery algorithms. Data Mining and Knowledge Discovery, 6, pp. 131-152.
[13] Friedman, J., Hastie, T., & Tibshirani, R. (2000) Additive logistic regression: a statistical view of boosting.
The Annals of Statistics, 28, 337-407.
[14] Shalev-Shwartz, S., & Singer, Y. (2006) Convex repeated games and Fenchel duality. Advances in Neural
Information Processing Systems 20.
[15] Breiman, L. (1998) Arcing classifiers. The Annals of Statistics, 26, pp. 801-849.
[16] Newman, D. J., Hettich, S., Blake, C. L., & Merz, C. J. (1998) UCI Repository of machine learning
databases [http://www.ics.uci.edu/?mlearn/MLRepository.html]. Irvine, CA: U. of California, Dept. of Information & Computer Science.
2,561 | 3,322 | Measuring Neural Synchrony by Message Passing
Justin Dauwels
Amari Research Unit
RIKEN Brain Science Institute
Wako-shi, Saitama, Japan
justin@dauwels.com
François Vialatte, Tomasz Rutkowski, and Andrzej Cichocki
Advanced Brain Signal Processing Laboratory
RIKEN Brain Science Institute
Wako-shi, Saitama, Japan
{fvialatte,tomek,cia}@brain.riken.jp
Abstract
A novel approach to measure the interdependence of two time series is proposed, referred to as 'stochastic event synchrony' (SES); it quantifies the alignment of two point processes by means of the following parameters: time delay, variance of the timing jitter, fraction of 'spurious' events, and average similarity of events. SES may be applied to generic one-dimensional and multi-dimensional point processes, however, the paper mainly focusses on point processes in time-frequency domain. The average event similarity is in that case described by two parameters: the average frequency offset between events in the time-frequency plane, and the variance of the frequency offset ('frequency jitter'); SES then consists of five parameters in total. Those parameters quantify the synchrony of oscillatory events, and hence, they provide an alternative to existing synchrony measures that quantify amplitude or phase synchrony. The pairwise alignment of point processes is cast as a statistical inference problem, which is solved by applying the max-product algorithm on a graphical model. The SES parameters are determined from the resulting pairwise alignment by maximum a posteriori (MAP) estimation. The proposed interdependence measure is applied to the problem of detecting anomalies in EEG synchrony of Mild Cognitive Impairment (MCI) patients; the results indicate that SES significantly improves the sensitivity of EEG in detecting MCI.
1 Introduction
Synchrony is an important topic in neuroscience. For instance, it is hotly debated whether the
synchronous firing of neurons plays a role in cognition [1] and even in consciousness [2]. The synchronous firing paradigm has also attracted substantial attention in both the experimental (e.g., [3])
and the theoretical neuroscience literature (e.g., [4]). Moreover, medical studies have reported that
many neurophysiological diseases (such as Alzheimer's disease) are often associated with abnormalities in neural synchrony [5, 6].
In this paper, we propose a novel measure to quantify the interdependence between point processes,
referred to as 'stochastic event synchrony' (SES); it consists of the following parameters: time delay,
variance of the timing jitter, fraction of 'spurious' events, and average similarity of the events. The
pairwise alignment of point processes is cast as a statistical inference problem, which is solved
by applying the max-product algorithm on a graphical model [7]. In the case of one-dimensional
point processes, the graphical model is cycle-free and statistical inference is exact, whereas for
multi-dimensional point processes, exact inference becomes intractable; the max-product algorithm
is then applied on a cyclic graphical model, which does not necessarily yield the optimal alignment [7].
Our experiments, however, indicate that it finds reasonable alignments in practice. The SES
parameters are determined from the resulting pairwise alignments by maximum a posteriori (MAP)
estimation.
The proposed method may be helpful to detect mental disorders such as Alzheimer's disease, since mental disorders are often associated with abnormal blood and neural activity flows, and changes in the synchrony of brain activity (see, e.g., [5, 6]). In this paper, we will present promising results on the early prediction of Alzheimer's disease from EEG signals based on SES.

This paper is organized as follows. In the next section, we introduce SES for the case of one-dimensional point processes. In Section 3, we consider the extension to multi-dimensional point processes. In Section 4, we use our measure to detect abnormalities in the EEG synchrony of Alzheimer's disease patients.
2 One-Dimensional Point Processes
Let us consider the one-dimensional point processes ('event strings') X and X′ in Fig. 1(a) (ignore Y and Z for now). We wish to quantify to which extent X and X′ are synchronized. Intuitively speaking, two event strings can be considered as synchronous (or 'locked') if they are identical apart from: (i) a time shift δ_t; (ii) small deviations in the event occurrence times ('event timing jitter'); (iii) a few event insertions and/or deletions. More precisely, for two event strings to be synchronous, the event timing jitter should be significantly smaller than the average inter-event time, and the number of deletions and insertions should comprise only a small fraction of the total number of events.

This intuitive concept of synchrony is illustrated in Fig. 1(a). The event string X′ is obtained from event string X by successively shifting X over δ_t (resulting in Y), slightly perturbing the event occurrence times (resulting in Z), and eventually, by adding (plus sign) and deleting (minus sign) events, resulting in X′. Adding and deleting events in Z leads to 'spurious' events in X and X′ (see Fig. 1(a); spurious events are marked in red): a spurious event in X is an event that cannot be paired with an event in X′ and vice versa.

The above intuitive reasoning leads to our novel measure for synchrony between two event strings, i.e., 'stochastic event synchrony' (SES); for the one-dimensional case, it is defined as the triplet (δ_t, s_t, ρ_spur), where s_t is the variance of the (event) timing jitter, and ρ_spur is the percentage of spurious events

ρ_spur ≜ (n_spur + n′_spur) / (n + n′),    (1)

with n and n′ the total number of events in X and X′ respectively, and n_spur and n′_spur the total number of spurious events in X and X′ respectively. SES is related to the metrics ('distances') proposed in [9]; those metrics are single numbers that quantify the synchrony between event strings. In contrast, we characterize synchrony by means of three parameters, which allows us to distinguish different types of synchrony (see [10]). We compute those three parameters by performing inference
in a probabilistic model. In order to describe that model, we consider Fig. 1(b), which shows a symmetric procedure to generate X and X′. First, one generates an event string V of length ℓ, where the events V_k are mutually independent and uniformly distributed in [0, T_0]. The strings Z and Z′ are generated by delaying V over −δ_t/2 and δ_t/2 respectively and by (slightly) perturbing the resulting event occurrence times (variance of timing jitter equals s_t/2). The sequences X and X′ are obtained from Z and Z′ by removing some of the events; more precisely, from each pair (Z_k, Z′_k), either Z_k or Z′_k is removed with probability p_s.
This procedure amounts to the statistical model:

p(x, x′, b, b′, v, δ_t, s_t, ℓ) = p(x | b, v, δ_t, s_t) p(x′ | b′, v, δ_t, s_t) p(b, b′ | ℓ) p(v | ℓ) p(ℓ) p(δ_t) p(s_t),    (2)

where b and b′ are binary strings that indicate whether the events in X and X′ are spurious (B_k = 1 if X_k is spurious, B_k = 0 otherwise; likewise for B′_k); the length ℓ has a geometric prior p(ℓ) = (1 − λ)λ^ℓ with λ ∈ (0, 1), and p(v | ℓ) = T_0^{−ℓ}. The prior on the binary strings b and b′ is given by

p(b, b′ | ℓ) = (1 − p_s)^{n+n′} p_s^{2ℓ−n−n′} = (1 − p_s)^{n+n′} p_s^{n_spur^tot},    (3)
with n_spur^tot = n_spur + n′_spur = 2ℓ − n − n′ the total number of spurious events in X and X′, n_spur = Σ_{k=1}^{n} b_k = ℓ − n′ the number of spurious events in X, and likewise n′_spur, the number of spurious events in X′. The conditional distributions in X and X′ are equal to:

p(x | b, v, δ_t, s_t) = Π_{k=1}^{n} N( x_k − v_{i_k}; −δ_t/2, s_t/2 )^{1−b_k},    (4)

p(x′ | b′, v, δ_t, s_t) = Π_{k=1}^{n′} N( x′_k − v_{i′_k}; δ_t/2, s_t/2 )^{1−b′_k},    (5)

where V_{i_k} is the event in V that corresponds to X_k (likewise V_{i′_k}), and N(x; m, s) is a univariate Gaussian distribution with mean m and variance s. Since we do not wish/need to encode prior information about δ_t and s_t, we adopt improper priors p(δ_t) = 1 = p(s_t).
Eventually, marginalizing (2) w.r.t. v results in the model:

p(x, x′, b, b′, δ_t, s_t, ℓ) = ∫ p(x, x′, b, b′, v, δ_t, s_t, ℓ) dv ∝ β^{n_spur^tot} Π_{k=1}^{n_non-spur} N( x′_{j′_k} − x_{j_k}; δ_t, s_t ),    (6)

with (x_{j_k}, x′_{j′_k}) the pairs of non-spurious events, n_non-spur = n + n′ − ℓ the total number of non-spurious event pairs, and β a constant that depends on p_s, λ, and T_0; in the example of Fig. 1(b), J = (1, 2, 3, 5, 6, 7, 8), J′ = (2, 3, 4, 5, 6, 7, 8), and n_non-spur = 7. In the following, we will denote model (6) by p(x, x′, j, j′, δ_t, s_t) instead of p(x, x′, b, b′, δ_t, s_t, ℓ), since for given x, x′, b, and b′ (and hence given n, n′, and n_non-spur), the length ℓ is fully determined, i.e., ℓ = n + n′ − n_non-spur; moreover, it is more natural to describe the model in terms of J and J′ instead of B and B′ (cf. RHS of (6)). Note that B and B′ can directly be obtained from J and J′.

It is also noteworthy that T_0, λ and p_s do not need to be specified individually, since they appear in (6) only through β. The latter serves in practice as a knob to control the number of spurious events.
Figure 1: One-dimensional stochastic event synchrony. (a) Asymmetric procedure; (b) symmetric procedure.
Given event strings X and X′, we wish to determine the parameters δ_t and s_t, and the hidden variables B and B′; the parameter ρ_spur (cf. (1)) can be obtained from the latter:

ρ̂_spur ≜ ( Σ_{k=1}^{n} b_k + Σ_{k=1}^{n′} b′_k ) / (n + n′).    (7)
There are various ways to solve this inference problem, but perhaps the most natural one is cyclic maximization: first one chooses initial values δ̂_t^(0) and ŝ_t^(0), then one alternates the following two update rules until convergence (or until the available time has elapsed):

(ĵ^(i+1), ĵ′^(i+1)) = argmax_{j,j′} p(x, x′, j, j′, δ̂_t^(i), ŝ_t^(i)),    (8)

(δ̂_t^(i+1), ŝ_t^(i+1)) = argmax_{δ_t,s_t} p(x, x′, ĵ^(i+1), ĵ′^(i+1), δ_t, s_t).    (9)
The update (9) is straightforward: it amounts to the empirical mean and variance, computed over the non-spurious events. The update (8) can readily be carried out by applying the Viterbi algorithm ('dynamic programming') on an appropriate trellis (with the pairs of non-spurious events (x_{j_k}, x′_{j′_k}) as states), or equivalently, by applying the max-product algorithm on a suitable factor graph [7]; the procedure is similar to dynamic time warping [8].
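For concreteness, the following Python sketch implements update (8) as a Needleman-Wunsch-style dynamic program over the trellis. It works in the log domain; the variable names and interface are our own, and we take each spurious event's contribution to the objective to be log β, per the factor β^{n_spur^tot} in (6).

```python
import math

def align_1d(x, xp, delta_t, s_t, beta):
    """Sketch of update (8): maximize log p(x, x', j, j' | delta_t, s_t).
    A matched pair scores log N(x'_{j'} - x_j; delta_t, s_t); each spurious
    event scores log(beta).  Equivalent to the Viterbi pass on the trellis."""
    def log_gauss(d):
        return -0.5 * math.log(2 * math.pi * s_t) - (d - delta_t) ** 2 / (2 * s_t)

    n, m = len(x), len(xp)
    lb = math.log(beta)
    NEG = float('-inf')
    score = [[NEG] * (m + 1) for _ in range(n + 1)]
    move = [[None] * (m + 1) for _ in range(n + 1)]
    score[0][0] = 0.0
    for k in range(n + 1):
        for kp in range(m + 1):
            if k > 0 and score[k - 1][kp] + lb > score[k][kp]:
                score[k][kp] = score[k - 1][kp] + lb       # x[k-1] spurious
                move[k][kp] = 'x'
            if kp > 0 and score[k][kp - 1] + lb > score[k][kp]:
                score[k][kp] = score[k][kp - 1] + lb       # xp[kp-1] spurious
                move[k][kp] = 'xp'
            if k > 0 and kp > 0:
                s = score[k - 1][kp - 1] + log_gauss(xp[kp - 1] - x[k - 1])
                if s > score[k][kp]:
                    score[k][kp] = s                        # matched pair
                    move[k][kp] = 'match'
    pairs, k, kp = [], n, m
    while k > 0 or kp > 0:
        mv = move[k][kp]
        if mv == 'match':
            pairs.append((k - 1, kp - 1)); k -= 1; kp -= 1
        elif mv == 'x':
            k -= 1
        else:
            kp -= 1
    return list(reversed(pairs))   # indices (j_k, j'_k) of non-spurious pairs
```

Update (9) then amounts to the sample mean and variance of x′_{j′} − x_j over the returned pairs.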
3 Multi-Dimensional Point Processes
In this section, we will focus on the interdependence of multi-dimensional point processes. As a
concrete example, we will consider multi-dimensional point processes in time-frequency domain;
the proposed algorithm, however, is not restricted to that particular situation; it is applicable to
generic multi-dimensional point processes.
Suppose that we are given a pair of (continuous-time) signals, e.g., EEG signals recorded from two different channels. As a first step, the time-frequency ('wavelet') transform of each signal is approximated as a sum of (half-ellipsoid) basis functions, referred to as 'bumps' (see Fig. 2 and [17]); each bump is described by five parameters: time X, frequency F, width ΔX, height ΔF, and amplitude W. The resulting bump models Y = ((X_1, F_1, ΔX_1, ΔF_1, W_1), . . . , (X_n, F_n, ΔX_n, ΔF_n, W_n)) and Y′ = ((X′_1, F′_1, ΔX′_1, ΔF′_1, W′_1), . . . , (X′_{n′}, F′_{n′}, ΔX′_{n′}, ΔF′_{n′}, W′_{n′})), representing the most prominent oscillatory activity, are thus 5-dimensional point processes. Our extension of stochastic event synchrony to multi-dimensional point processes (and bump models in particular) is derived from the following observation (see Fig. 3): bumps in one time-frequency map may not be present in the other map ('spurious' bumps); other bumps are present in both maps ('non-spurious bumps'), but appear at slightly different positions on the maps. The black lines in Fig. 3 connect the centers of non-spurious bumps, and hence, visualize the offset between pairs of non-spurious bumps. We quantify the interdependence between two bump models by five parameters, i.e., the parameters ρ_spur, δ_t, and s_t introduced in Section 2, in addition to:

- δ_f: the average frequency offset between non-spurious bumps,
- s_f: the variance of the frequency offset between non-spurious bumps.
We determine the alignment of two bump models in addition to the 5 above parameters by an inference algorithm similar to the one of Section 2, as we will explain in the following; we will use the notation θ = (δ_t, s_t, δ_f, s_f). Model (6) may naturally be extended in time-frequency domain as:

p(y, y′, j, j′, θ) ∝ β^{n_spur^tot} Π_{k=1}^{n_non-spur} N( (x′_{j′_k} − x_{j_k}) / (Δx_{j_k} + Δx′_{j′_k}); δ_t, s_t ) N( (f′_{j′_k} − f_{j_k}) / (Δf_{j_k} + Δf′_{j′_k}); δ_f, s_f ) · p(δ_t) p(s_t) p(δ_f) p(s_f),    (10)

where the offset in time and the offset in frequency are normalized by the width and height respectively of the bumps; we will elaborate on the priors on the parameters θ later on. In principle, one may determine the sequences J and J′ and the parameters θ by cyclic maximization along the lines of (8) and (9). In the multi-dimensional case, however, the update (8) is no longer tractable: one needs to allow permutations of events, the indices j_k and j′_{k′} are no longer necessarily monotonically increasing, and as a consequence, the state space becomes drastically larger. As a result, the Viterbi algorithm (or equivalently, the max-product algorithm applied on the cycle-free factor graph of model (10)) becomes impractical.
at hand, which will amount to a suboptimal but practical procedure to obtain pairwise alignments
of multi-dimensional point processes (and bump models in particular). To this end, we introduce a
representation of model (10) that is naturally represented by a cyclic graph: for each pair of events
Yk and Yk00 , we introduce a binary variable Ckk0 that equals one if Yk and Yk00 form pair of nonspurious events and is zero otherwise. Since each event in Y associated to at most one event in Y 0 ,
we have the constraints:
0
n
X
k0 =1
0
4
C1k0 = S1 ? {0, 1},
n
X
0
4
C2k0 = S2 ? {0, 1}, . . . ,
k0 =1
n
X
k0 =1
4
4
Cnk0 = Sn ? {0, 1},
(11)
and similarly, each event in Y 0 is associated to at most one event in Y , which is expressed by a similar
set of constraints. The sequences S and S 0 are related to the sequences B and B 0 (cf. Section 2):
Bk = 1 ? Sk and Bk0 = 1 ? Sk0 . In this representation, the global statistical model (10) can be cast
as:
n
n0
Y
Y
0
0
p(y, y , b, b , c, ?) ?
(??[bk ? 1] + ?[bk ])
(??[b0k ? 1] + ?[b0k ])
k0 =1
k=1
!
f0 ? f
ckk0
x0 ? x
k
k
k0
k0
; ? t , st N
; ? f , sf
p(?t )p st p(?f )p sf
N
0
0
?xk + ?xk0
?fk + ?fk0
0
?
n Y
n
Y
k=1 k0 =1
?
n
Y
0
?[bk +
n
X
0
ckk0
k0 =1
k=1
n
n
X
Y
? 1]
?[b0k0 +
ckk0 ? 1] .
k0 =1
(12)
k=1
Since we do not need to encode prior information about δ_t and δ_f, we choose improper priors p(δ_t) = 1 = p(δ_f). On the other hand, we have prior knowledge about s_t and s_f. Indeed, we expect a bump in one time-frequency map to appear in the other map at about the same frequency, but there may be some timing offset between both bumps. For example, bump nr. 1 in Fig. 3(a) (t = 10.7s) should be paired with bump nr. 3 (t = 10.9s) and not with nr. 2 (t = 10.8s), since the former is much closer in frequency than the latter. As a consequence, we a priori expect smaller values for s_f than for s_t. We encode this prior information by means of conjugate priors for s_t and s_f, i.e., scaled inverse chi-square distributions.

A factor graph of model (14) is shown in Fig. 4 (each edge represents a variable, each node corresponds to a factor of (14), as indicated by the arrows at the right hand side; we refer to [7] for an introduction to factor graphs). We omitted the edges for the (observed) variables X_k, X′_{k′}, F_k, F′_{k′}, ΔX_k, ΔX′_{k′}, ΔF_k, and ΔF′_{k′} in order not to clutter the figure.
Figure 2: Two-dimensional stochastic event synchrony. (Top: time-frequency maps of the two signals; bottom: the corresponding bump models.)
We determine the alignment C = (C_11, C_12, . . . , C_nn′) and the parameters θ = (δ_t, s_t, δ_f, s_f) by maximum a posteriori (MAP) estimation:

(ĉ, θ̂) = argmax_{c,θ} p(y, y′, c, θ),    (13)

where p(y, y′, c, θ) is obtained from (12) by marginalizing over b and b′:

p(y, y′, c, θ) ∝ Π_{k=1}^{n} ( β δ[ Σ_{k′=1}^{n′} c_{kk′} ] + δ[ Σ_{k′=1}^{n′} c_{kk′} − 1 ] ) · Π_{k′=1}^{n′} ( β δ[ Σ_{k=1}^{n} c_{kk′} ] + δ[ Σ_{k=1}^{n} c_{kk′} − 1 ] )
  · Π_{k=1}^{n} Π_{k′=1}^{n′} [ N( (x′_{k′} − x_k)/(Δx_k + Δx′_{k′}); δ_t, s_t ) N( (f′_{k′} − f_k)/(Δf_k + Δf′_{k′}); δ_f, s_f ) ]^{c_{kk′}} · p(δ_t) p(s_t) p(δ_f) p(s_f).    (14)
Figure 3: Spurious and non-spurious activity. (a) Bump models of two EEG channels. (b) Non-spurious bumps (ρ_spur = 27%); the black lines connect the centers of non-spurious bumps. (Axes: time t [s] versus frequency f [Hz].)
Figure 4: Factor graph of model (14).
From ĉ, we obtain the estimate ρ̂_spur as:

ρ̂_spur = ( Σ_{k=1}^{n} b̂_k + Σ_{k′=1}^{n′} b̂′_{k′} ) / (n + n′) = ( n + n′ − 2 Σ_{k=1}^{n} Σ_{k′=1}^{n′} ĉ_{kk′} ) / (n + n′).    (15)
The MAP estimate (13) is intractable, and we try to obtain (13) by cyclic maximization: first, the parameters θ are initialized: δ̂_t^(0) = 0 = δ̂_f^(0), ŝ_t^(0) = s_{0,t}, and ŝ_f^(0) = s_{0,f}; then one alternates the following two update rules until convergence (or until the available time has elapsed):

ĉ^(i+1) = argmax_c p(y, y′, c, θ̂^(i)),    (16)

θ̂^(i+1) = argmax_θ p(y, y′, ĉ^(i+1), θ).    (17)

The estimate θ̂^(i+1) (17) is available in closed form; indeed, it is easily verified that the point estimates δ̂_t^(i+1) and δ̂_f^(i+1) are the (sample) mean of the timing and frequency offset respectively, computed over all pairs of non-spurious events. The estimates ŝ_t^(i+1) and ŝ_f^(i+1) are obtained similarly.
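A sketch of this closed-form update follows (the names are ours; for brevity it uses the plain sample variances and omits the correction coming from the scaled inverse chi-square priors on s_t and s_f).

```python
def update_theta(pairs, bumps, bumps_p):
    """Update (17), sketched: sample means and variances of the normalized
    time and frequency offsets over all non-spurious pairs.  Each bump is
    assumed to be a dict with keys t, f, dt, df (time, frequency, width,
    height); the prior correction on s_t, s_f is omitted here."""
    dts, dfs = [], []
    for k, kp in pairs:
        b, bp = bumps[k], bumps_p[kp]
        dts.append((bp['t'] - b['t']) / (b['dt'] + bp['dt']))
        dfs.append((bp['f'] - b['f']) / (b['df'] + bp['df']))
    if not dts:
        raise ValueError("no non-spurious pairs")
    n = len(dts)
    delta_t = sum(dts) / n
    delta_f = sum(dfs) / n
    s_t = sum((d - delta_t) ** 2 for d in dts) / n
    s_f = sum((d - delta_f) ** 2 for d in dfs) / n
    return delta_t, s_t, delta_f, s_f
```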
Update (16), i.e., finding the optimal pairwise alignment C for given values θ̂^(i) of the parameters θ, is less straightforward: it involves an intractable combinatorial optimization problem. We attempt to solve that problem by applying the max-product algorithm to the (cyclic) factor graph depicted in Fig. 4 [7]. Let us first point out that, since the alignment C is computed for given θ = θ̂^(i), the (upward) messages along the edges θ are the point estimate θ̂^(i) (cf. (16)); equivalently, for the purpose of computing (16), one may remove the θ edges and the two bottom nodes in Fig. 4; the N-nodes then become leaf nodes. The other messages in the graph are iteratively updated according to the generic max-product update rule [7].

The resulting inference algorithm for computing (16) is summarized in Table 1. The messages μ↑(c_kk′) and μ′↑(c_kk′) propagate upward along the edges c_kk′ towards the constraint nodes connected to the edges B_k and B′_{k′} respectively (see Fig. 4, left hand side); the messages μ↓(c_kk′) and μ′↓(c_kk′) propagate downward along the edges c_kk′ from the constraint nodes connected to the edges B_k and B′_{k′} respectively. After initialization (18) of the messages μ↑(c_kk′) and μ′↑(c_kk′) (k = 1, 2, . . . , n; k′ = 1, 2, . . . , n′), one alternately updates (i) the messages μ↓(c_kk′) (19) and μ′↓(c_kk′) (20), and (ii) the messages μ↑(c_kk′) (21) and μ′↑(c_kk′) (22), until convergence; it is noteworthy that, although the max-product algorithm is not guaranteed to converge on cyclic graphs, we observed in our experiments (see Section 4) that alternating the updates (19)-(22) always converged to a fixed point. At last, one computes the marginals p(c_kk′) (23), and from the latter, one may determine the decisions ĉ_kk′ by greedy decimation.
4 Diagnosis of MCI from EEG

We analyzed rest eyes-closed EEG data recorded from 21 sites on the scalp based on the 10-20 system. The sampling frequency was 200 Hz, and the signals were bandpass filtered between 4 and 30 Hz.
Initialization
??(ckk0 ) = ??0 (ckk0 ) ?
N
x0 ? x
f0 ? f
k
k
k0
k0
; ? t , st N
; ? f , sf
?xk + ?x0k0
?fk + ?fk0 0
!ckk0
Iteratively compute messages until convergence
A. Downward messages:
??(ckk0 = 0)
max (?, max`0 6=k0 ??(ck`0 = 1)/??(ck`0 = 0))
?
??(ckk0 = 1)
1
0
?? (ckk0 = 0)
max (?, max`6=k ??0 (c`k0 = 1)/??0 (c`k0 = 0))
?
0
?? (ckk0 = 1)
1
(18)
(19)
(20)
B. Upward messages:
!
f0 ? f
ckk0
x0 ? x
k
k
k0
k0
; ? t , st N
; ? f , sf
0
0
?xk + ?xk0
?fk + ?fk0
!
x0 ? x
f0 ? f
ckk0
0
k
k
k
k0
??0 (ckk0 ) ? ??(ckk0 ) N
;
?
,
s
N
;
?
,
s
t t
f f
?xk + ?x0k0
?fk + ?fk0 0
??(ckk0 ) ? ??0 (ckk0 ) N
Marginals
p(ckk0 ) ? ??(ckk0 )??0 (ckk0 ) N
x0 ? x
f0 ? f
k
k
k0
k0
; ? t , st N
; ? f , sf
?xk + ?x0k0
?fk + ?fk0 0
!ckk0
(21)
(22)
(23)
Table 1: Inference algorithm.
The subjects comprised two study groups: the first consisted of a group of 22 patients diagnosed as suffering from MCI, who subsequently developed mild AD. The other group was a control set of 38 age-matched, healthy subjects who had no memory or other cognitive impairments. Pre-selection was conducted to ensure that the data were of a high quality, as determined by the presence of at least 20 s of artifact-free data. We computed a large variety of synchrony measures for both data sets; the results are summarized in Table 2. We report results for global synchrony, obtained by averaging the synchrony measures over 5 brain regions (frontal, temporal left and right, central, occipital). For SES, the bump models were clustered by means of the aggregation algorithm described in [17].
The strongest observed effect is a significantly higher degree of background noise (ρ_spur) in MCI patients, more specifically, a high number of spurious, non-synchronous oscillatory events (p = 0.00021). We verified that the SES measures are not correlated (Pearson r) with other synchrony measures (p > 0.10); in contrast to the other measures, SES quantifies the synchrony of oscillatory events (instead of more conventional amplitude or phase synchrony). Combining ρ_spur with ffDTF yields good classification of MCI vs. control patients (see Fig. 5(a)). Interestingly, we did not observe a significant effect on the timing jitter s_t of the non-spurious events (p = 0.91). In other words, AD seems to be associated with a significant increase of spurious background activity, while the non-spurious activity remains well synchronized. Moreover, only the non-spurious activity slows down (p = 0.0012; see Fig. 5(c)); the average frequency of the spurious activity is not affected in MCI patients (see Fig. 5(b)). In future work, we will verify those observations by means of additional data sets.
Measure                 p-value
Cross-correlation       0.028*
Coherence               0.060
Phase Coherence         0.72
Corr-entropy            0.27
Wave-entropy            0.012*
Granger coherence       0.15
Partial Coherence       0.16
PDC                     0.60
DTF                     0.34
ffDTF                   0.0012**
dDTF                    0.030*
Kullback-Leibler        0.072
Rényi                   0.076
Jensen-Shannon          0.084
Jensen-Rényi            0.12
I_W                     0.060
I                       0.080
N_k                     0.032*
S_k                     0.29
H_k                     0.090
S-estimator             0.33
Hilbert Phase           0.15
Wavelet Phase           0.082
Evolution Map           0.072
Instantaneous Period    0.020*
s_t                     0.91
ρ_spur                  0.00021**
(References for the individual measure families: [13], [15], [16], [18]–[24].)

Table 2: Sensitivity of synchrony measures for early prediction of AD (p-values for Mann–Whitney test; * and ** indicate p < 0.05 and p < 0.005 respectively). N_k, S_k, and H_k are three measures of nonlinear interdependence [15].
[Figure 5: Results. (a) ρ_spur vs. ffDTF (F²_ij) for MCI and control (CTR) subjects. (b) Average frequency of the spurious activity, f_spur (p = 0.87). (c) Average frequency of the non-spurious activity, f_non-spur (p = 0.0019).]
References
[1] F. Varela, J. P. Lachaux, E. Rodriguez, and J. Martinerie, “The Brainweb: Phase Synchronization and Large-Scale Integration,” Nature Reviews Neuroscience, 2(4):229–239, 2001.
[2] W. Singer, “Consciousness and the Binding Problem,” Annals of the New York Academy of Sciences, 929:123–146, April 2001.
[3] M. Abeles, H. Bergman, E. Margalit, and E. Vaadia, “Spatiotemporal Firing Patterns in the Frontal Cortex of Behaving Monkeys,” J. Neurophysiol., 70(4):1629–1638, 1993.
[4] S. Amari, H. Nakahara, S. Wu, and Y. Sakai, “Synchronous Firing and Higher-Order Interactions in Neuron Pool,” Neural Computation, 15:127–142, 2003.
[5] H. Matsuda, “Cerebral Blood Flow and Metabolic Abnormalities in Alzheimer's Disease,” Ann. Nucl. Med., vol. 15, pp. 85–92, 2001.
[6] J. Jong, “EEG Dynamics in Patients with Alzheimer's Disease,” Clinical Neurophysiology, 115:1490–1505, 2004.
[7] H.-A. Loeliger, “An Introduction to Factor Graphs,” IEEE Signal Processing Magazine, Jan. 2004, pp. 28–41.
[8] C. S. Myers and L. R. Rabiner, “A Comparative Study of Several Dynamic Time-Warping Algorithms for Connected Word Recognition,” The Bell System Technical Journal, 60(7):1389–1409, September 1981.
[9] J. D. Victor and K. P. Purpura, “Metric-space Analysis of Spike Trains: Theory, Algorithms, and Application,” Network: Comput. Neural Systems, 8:127–164, 1997.
[10] H. P. C. Robinson, “The Biophysical Basis of Firing Variability in Cortical Neurons,” Chapter 6 in Computational Neuroscience: A Comprehensive Approach, Mathematical Biology & Medicine Series, edited by Jianfeng Feng, Chapman & Hall/CRC, 2003.
[11] E. Pereda, R. Q. Quiroga, and J. Bhattacharya, “Nonlinear Multivariate Analysis of Neurophysiological Signals,” Progress in Neurobiology, 77:1–37, 2005.
[12] M. Breakspear, “Dynamic Connectivity in Neural Systems: Theoretical and Empirical Considerations,” Neuroinformatics, vol. 2, no. 2, 2004.
[13] M. Kamiński and Hualou Liang, “Causal Influence: Advances in Neurosignal Analysis,” Critical Reviews in Biomedical Engineering, 33(4):347–430, 2005.
[14] C. J. Stam, “Nonlinear Dynamical Analysis of EEG and MEG: Review of an Emerging Field,” Clinical Neurophysiology, 116:2266–2301, 2005.
[15] R. Q. Quiroga, A. Kraskov, T. Kreuz, and P. Grassberger, “Performance of Different Synchronization Measures in Real Data: A Case Study on EEG Signals,” Physical Review E, vol. 65, 2002.
[16] P. Nunez and R. Srinivasan, Electric Fields of the Brain: The Neurophysics of EEG, Oxford University Press, 2006.
[17] F. Vialatte, C. Martin, R. Dubois, J. Haddad, B. Quenet, R. Gervais, and G. Dreyfus, “A Machine Learning Approach to the Analysis of Time-Frequency Maps, and Its Application to Neural Dynamics,” Neural Networks, 20:194–209, 2007.
[18] Jian-Wu Xu, H. Bakardjian, A. Cichocki, and J. C. Principe, “EEG Synchronization Measure: a Reproducing Kernel Hilbert Space Approach,” submitted to IEEE Transactions on Biomedical Engineering Letters, Sept. 2006.
[19] M. G. Rosenblum, L. Cimponeriu, A. Bezerianos, A. Patzak, and R. Mrowka, “Identification of Coupling Direction: Application to Cardiorespiratory Interaction,” Physical Review E, 65:041909, 2002.
[20] C. S. Herrmann, M. Grigutsch, and N. A. Busch, “EEG Oscillations and Wavelet Analysis,” in Todd Handy (ed.), Event-Related Potentials: a Methods Handbook, pp. 229–259, Cambridge, MIT Press, 2005.
[21] C. Carmeli, M. G. Knyazeva, G. M. Innocenti, and O. De Feo, “Assessment of EEG Synchronization Based on State-Space Analysis,” NeuroImage, 25:339–354, 2005.
[22] A. Kraskov, H. Stögbauer, and P. Grassberger, “Estimating Mutual Information,” Phys. Rev. E, 69(6):066138, 2004.
[23] S. Aviyente, “A Measure of Mutual Information on the Time-Frequency Plane,” Proc. of ICASSP 2005, vol. 4, pp. 481–484, March 18–23, 2005, Philadelphia, PA, USA.
[24] J.-P. Lachaux, E. Rodriguez, J. Martinerie, and F. J. Varela, “Measuring Phase Synchrony in Brain Signals,” Human Brain Mapping, 8:194–208, 1999.
| 3322 |@word mild:2 neurophysiology:2 seems:1 propagate:2 bn:4 minus:1 initial:1 cyclic:8 series:2 loeliger:1 interestingly:1 wako:2 existing:1 com:1 attracted:1 readily:1 tot:1 grassberger:2 fn:2 remove:1 update:9 n0:16 v:2 half:1 leaf:1 greedy:1 plane:2 xk:13 filtered:1 mental:2 detecting:2 node:6 c22:1 five:3 height:2 mathematical:1 along:4 become:1 consists:2 introduce:3 interdependence:6 x0:17 pairwise:6 inter:1 indeed:2 multi:10 brain:9 chi:1 increasing:1 becomes:3 estimating:1 moreover:3 notation:1 matched:1 matsuda:1 kk0:2 string:12 monkey:1 emerging:1 developed:1 finding:1 impractical:1 temporal:1 scaled:1 control:3 unit:1 medical:1 appear:3 engineering:2 timing:9 todd:1 consequence:2 oxford:1 firing:5 noteworthy:2 black:2 plus:1 initialization:2 locked:1 c21:1 practical:1 gervais:1 practice:2 handy:1 cn1:1 procedure:6 jan:1 empirical:2 bell:1 significantly:3 pre:1 word:2 cannot:1 selection:1 applying:6 influence:1 conventional:1 map:14 shi:2 center:2 straightforward:2 attention:1 occipital:1 bn0:1 disorder:2 rule:3 estimator:1 ckk0:39 updated:1 annals:1 play:1 suppose:1 magazine:1 anomaly:1 exact:2 bk0:5 programming:1 larly:1 decimation:1 bergman:1 pa:1 approximated:1 jk:1 recognition:1 asymmetric:1 timates:1 observed:3 role:1 bottom:1 solved:2 region:1 cycle:2 improper:2 connected:3 removed:1 yk:2 substantial:1 disease:7 edited:1 insertion:2 dynamic:6 basis:2 neurophysiol:1 easily:1 icassp:1 k0:29 various:1 represented:1 chapter:1 riken:3 train:1 enyi:2 describe:2 jk0:1 neurophysics:1 jianfeng:1 pearson:1 neuroinformatics:1 larger:1 solve:3 s:14 amari:2 otherwise:2 transform:1 sequence:4 vaadia:1 myers:1 biophysical:1 propose:1 interaction:2 product:8 combining:1 spur:24 academy:1 f10:2 intuitive:2 stam:1 convergence:4 p:6 comparative:1 coupling:1 b0:12 progress:1 p2:1 ois:1 involves:1 indicate:4 hotly:1 quantify:6 synchronized:2 direction:1 stochastic:6 subsequently:1 human:1 mann:1 crc:1 f1:2 clustered:1 extension:2 quiroga:2 considered:1 hall:1 cognition:1 viterbi:2 visualize:1 bump:26 mapping:1 early:2 adopt:1 omitted:1 purpose:1 estimation:3 proc:1 applicable:1 combinatorial:1 iw:1 healthy:1 individually:1 vice:1 mit:1 gaussian:1 always:1 ck:2 martinerie:2 pn:4 knob:1 encode:3 derived:1 focus:2 tomek:1 vk:1 mainly:1 hk:1 contrast:2 detect:2 posteriori:3 inference:9 helpful:1 i0:1 margalit:1 spurious:35 hidden:1 upward:3 classification:1 priori:1 integration:1 mutual:2 equal:3 comprise:1 field:2 sampling:1 chapman:1 identical:1 represents:1 biology:1 future:1 report:1 few:1 franc:1 comprehensive:1 phase:7 argmax:5 b20:1 attempt:1 message:11 alignment:12 analyzed:1 edge:8 closer:1 partial:1 initialized:1 xjk:3 causal:1 theoretical:2 instance:1 measuring:2 whitney:1 maximization:3 deviation:1 saitama:2 comprised:1 delay:2 conducted:1 characterize:1 reported:1 connect:2 spatiotemporal:1 abele:1 chooses:1 nski:1 st:41 pn0:3 sensitivity:2 probabilistic:1 pool:1 concrete:1 kami:1 ctr:3 w1:1 connectivity:1 recorded:2 successively:1 central:1 choose:1 cognitive:2 japan:2 potential:1 rosenblum:1 de:1 c12:2 b2:1 summarized:2 ntot:3 ad:3 later:1 try:1 closed:2 red:1 wave:1 aggregation:1 tomasz:1 ttt:1 synchrony:28 square:1 fn0:3 variance:8 who:2 likewise:3 yield:2 rabiner:1 identification:1 converged:1 submitted:1 oscillatory:4 explain:1 strongest:1 phys:1 ed:1 frequency:25 pp:4 naturally:2 associated:5 knowledge:1 improves:1 organized:1 hilbert:2 amplitude:3 fk0:10 higher:2 april:1 diagnosed:1 biomedical:2 until:6 correlation:1 hand:4 nonlinear:3 assessment:1 rodriguez:2 quality:1 
perhaps:1 indicated:1 artifact:1 usa:1 effect:2 concept:1 normalized:1 consisted:1 verify:1 former:1 hence:3 evolution:1 alternating:1 symmetric:2 laboratory:1 consciousness:2 iteratively:2 leibler:1 illustrated:1 width:2 prominent:1 reasoning:1 dreyfus:1 instantaneous:1 novel:3 consideration:1 physical:2 perturbing:2 jp:1 cerebral:1 onedimensional:1 x0n0:2 marginals:2 refer:1 significant:2 versa:1 cambridge:1 fk:10 similarly:1 had:1 f0:7 similarity:3 longer:2 cortex:1 behaving:1 multivariate:1 apart:1 binary:3 victor:1 additional:1 brainweb:1 c11:2 determine:5 paradigm:1 converge:1 monotonically:1 signal:10 ii:2 period:1 ogbauer:1 x10:2 technical:1 cross:1 clinical:2 paired:2 prediction:2 patient:7 metric:3 nunez:1 kernel:1 whereas:1 addition:2 background:2 jian:1 rest:1 dtf:1 hz:4 subject:2 med:1 flow:2 alzheimer:6 presence:1 abnormality:3 kraskov:2 iii:1 wn:1 variety:1 simi:1 suboptimal:1 vik:2 shift:1 synchronous:6 whether:2 t0:4 passing:1 speaking:1 york:1 impairment:2 amount:3 clutter:1 generate:1 percentage:1 sign:2 neuroscience:4 diagnosis:1 vol:4 affected:1 srinivasan:1 group:3 varela:2 blood:2 verified:2 graph:11 fraction:3 sum:1 inverse:1 letter:1 jitter:8 reasonable:1 x0j:3 wu:2 oscillation:1 decision:1 coherence:4 abnormal:1 guaranteed:1 distinguish:1 activity:10 scalp:1 precisely:2 constraint:2 generates:1 performing:1 martin:1 according:1 alternate:2 carmeli:1 march:1 conjugate:1 smaller:2 slightly:3 rev:1 s1:1 intuitively:1 dv:1 restricted:1 mutually:1 remains:1 eventually:2 granger:1 singer:1 tractable:1 serf:1 zk0:2 end:1 vi0k:2 available:3 observe:1 generic:3 appropriate:1 cia:1 occurrence:3 bhattacharya:1 alternative:1 andrzej:1 cf:4 ensure:1 aviyente:1 graphical:4 medicine:1 pdc:1 feng:1 warping:2 spike:1 nr:3 september:1 distance:1 topic:1 extent:1 meg:1 length:3 index:1 ellipsoid:1 equivalently:3 liang:1 slows:1 lachaux:2 xk0:4 av:2 neuron:3 observation:2 situation:1 extended:1 variability:1 delaying:1 neurobiology:1 reproducing:1 bk:12 introduced:1 cast:3 pair:9 specified:1 elapsed:2 deletion:2 robinson:1 justin:2 dynamical:1 pattern:1 haddad:1 max:12 memory:1 deleting:2 shifting:1 event:65 suitable:1 natural:2 critical:1 advanced:1 nucl:1 representing:1 sk0:1 eye:1 dubois:1 carried:1 b10:1 cichocki:2 philadelphia:1 sept:1 sn:1 prior:10 literature:1 geometric:1 review:5 marginalizing:2 x0k:1 synchronization:4 fully:1 expect:2 permutation:1 age:1 cn2:1 degree:1 s0:1 principle:1 metabolic:1 last:1 free:3 drastically:1 side:2 allow:1 institute:2 distributed:1 xn:4 cortical:1 sakai:1 computes:1 herrmann:1 transaction:1 ignore:1 kullback:1 global:2 handbook:1 b1:1 alternatively:1 continuous:1 quantifies:2 triplet:1 sk:2 table:4 purpura:1 promising:1 channel:2 zk:2 nature:1 eeg:15 necessarily:2 electric:1 domain:3 did:1 rh:1 s2:1 arrow:1 noise:1 suffering:1 x1:2 xu:1 fig:16 referred:3 site:1 elaborate:1 trellis:1 position:1 neuroimage:1 wish:3 bandpass:1 sf:17 debated:1 comput:1 wavelet:3 removing:1 z0:1 down:1 jensen:2 offset:9 intractable:3 adding:2 corr:1 downward:2 breakspear:1 nk:2 entropy:2 depicted:1 univariate:1 neurophysiological:1 expressed:1 binding:1 corresponds:2 w10:1 conditional:1 marked:1 nakahara:1 ann:1 towards:1 change:1 determined:4 specifically:1 uniformly:1 averaging:1 total:6 mci:10 experimental:1 xn0:2 e:1 maxproduct:1 shannon:1 jong:1 wn0:1 principe:1 latter:4 frontal:2 correlated:1 |
2,562 | 3,323 | The Tradeoffs of Large Scale Learning
Léon Bottou
NEC laboratories of America
Princeton, NJ 08540, USA
leon@bottou.org
Olivier Bousquet
Google Zürich
8002 Zurich, Switzerland
olivier.bousquet@m4x.org
Abstract
This contribution develops a theoretical framework that takes into account the
effect of approximate optimization on learning algorithms. The analysis shows
distinct tradeoffs for the case of small-scale and large-scale learning problems.
Small-scale learning problems are subject to the usual approximation?estimation
tradeoff. Large-scale learning problems are subject to a qualitatively different
tradeoff involving the computational complexity of the underlying optimization
algorithms in non-trivial ways.
1 Motivation
The computational complexity of learning algorithms has seldom been taken into account by the
learning theory. Valiant [1] states that a problem is “learnable” when there exists a probably approximately correct learning algorithm with polynomial complexity. Whereas much progress has been
made on the statistical aspect (e.g., [2, 3, 4]), very little has been told about the complexity side of
this proposal (e.g., [5].)
Computational complexity becomes the limiting factor when one envisions large amounts of training
data. Two important examples come to mind:
? Data mining exists because competitive advantages can be achieved by analyzing the
masses of data that describe the life of our computerized society. Since virtually every
computer generates data, the data volume is proportional to the available computing power.
Therefore one needs learning algorithms that scale roughly linearly with the total volume
of data.
• Artificial intelligence attempts to emulate the cognitive capabilities of human beings. Our
biological brains can learn quite efficiently from the continuous streams of perceptual data
generated by our six senses, using limited amounts of sugar as a source of power. This
observation suggests that there are learning algorithms whose computing time requirements
scale roughly linearly with the total volume of data.
This contribution finds its source in the idea that approximate optimization algorithms might be
sufficient for learning purposes. The first part proposes new decomposition of the test error where
an additional term represents the impact of approximate optimization. In the case of small-scale
learning problems, this decomposition reduces to the well known tradeoff between approximation
error and estimation error. In the case of large-scale learning problems, the tradeoff is more complex because it involves the computational complexity of the learning algorithm. The second part
explores the asymptotic properties of the large-scale learning tradeoff for various prototypical learning algorithms under various assumptions regarding the statistical estimation rates associated with
the chosen objective functions. This part clearly shows that the best optimization algorithms are not
necessarily the best learning algorithms. Maybe more surprisingly, certain algorithms perform well
regardless of the assumed rate for the statistical estimation error.
2 Approximate Optimization
2.1 Setup
Following [6, 2], we consider a space of input–output pairs (x, y) ∈ X × Y endowed with a probability distribution P(x, y). The conditional distribution P(y|x) represents the unknown relationship between inputs and outputs. The discrepancy between the predicted output ŷ and the real output y is measured with a loss function ℓ(ŷ, y). Our benchmark is the function f* that minimizes the expected risk

  E(f) = ∫ ℓ(f(x), y) dP(x, y) = E[ℓ(f(x), y)],    that is,    f*(x) = arg min_ŷ E[ℓ(ŷ, y) | x].

Although the distribution P(x, y) is unknown, we are given a sample S of n independently drawn training examples (x_i, y_i), i = 1 … n. We define the empirical risk

  En(f) = (1/n) Σ_{i=1}^{n} ℓ(f(x_i), y_i) = En[ℓ(f(x), y)].
Our first learning principle consists in choosing a family F of candidate prediction functions and finding the function f_n = arg min_{f∈F} En(f) that minimizes the empirical risk. Well known combinatorial results (e.g., [2]) support this approach provided that the chosen family F is sufficiently restrictive. Since the optimal function f* is unlikely to belong to the family F, we also define f*_F = arg min_{f∈F} E(f). For simplicity, we assume that f*, f*_F and f_n are well defined and unique. We can then decompose the excess error as

  E[E(f_n) − E(f*)] = E[E(f*_F) − E(f*)] + E[E(f_n) − E(f*_F)] = Eapp + Eest,    (1)
where the expectation is taken with respect to the random choice of training set. The approximation
error Eapp measures how closely functions in F can approximate the optimal solution f*. The estimation error Eest measures the effect of minimizing the empirical risk En(f) instead of the expected risk E(f). The estimation error is determined by the number of training examples and by
the capacity of the family of functions [2]. Large families1 of functions have smaller approximation
errors but lead to higher estimation errors. This tradeoff has been extensively discussed in the
literature [2, 3] and leads to excess errors that scale between the inverse and the inverse square root of
the number of examples [7, 8].
2.2 Optimization Error
Finding fn by minimizing the empirical risk En (f ) is often a computationally expensive operation.
Since the empirical risk En (f ) is already an approximation of the expected risk E(f ), it should
not be necessary to carry out this minimization with great accuracy. For instance, we could stop an
iterative optimization algorithm long before its convergence.
Let us assume that our minimization algorithm returns an approximate solution f̃_n such that

  En(f̃_n) < En(f_n) + ρ

where ρ ≥ 0 is a predefined tolerance. An additional term Eopt = E[E(f̃_n) − E(f_n)] then appears in the decomposition of the excess error E = E[E(f̃_n) − E(f*)]:

  E = E[E(f*_F) − E(f*)] + E[E(f_n) − E(f*_F)] + E[E(f̃_n) − E(f_n)] = Eapp + Eest + Eopt.    (2)

We call this additional term optimization error. It reflects the impact of the approximate optimization on the generalization performance. Its magnitude is comparable to ρ (see section 3.1).
¹ We often consider nested families of functions of the form F_c = {f ∈ H, Ω(f) ≤ c}. Then, for each value of c, function f_n is obtained by minimizing the regularized empirical risk En(f) + λΩ(f) for a suitable choice of the Lagrange coefficient λ. We can then control the estimation–approximation tradeoff by choosing λ instead of c.
2.3 The Approximation–Estimation–Optimization Tradeoff
This decomposition leads to a more complicated compromise. It involves three variables and two
constraints. The constraints are the maximal number of available training example and the maximal
computation time. The variables are the size of the family of functions F, the optimization accuracy
?, and the number of examples n. This is formalized by the following optimization problem.
  min_{F,ρ,n}  E = Eapp + Eest + Eopt    subject to    n ≤ n_max  and  T(F, ρ, n) ≤ T_max.    (3)
The number n of training examples is a variable because we could choose to use only a subset of the available training examples in order to complete the optimization within the allotted time. This happens often in practice. Table 1 summarizes the typical evolution of the quantities of interest as the three variables F, n, and ρ increase.
Table 1: Typical variations when F, n, and ρ increase.

                                     F      n      ρ
Eapp  (approximation error)          ↘
Eest  (estimation error)             ↗      ↘
Eopt  (optimization error)           ···    ···    ↗
T     (computation time)             ↗      ↗      ↘
The solution of the optimization program (3) depends critically on which budget constraint is active: constraint n < n_max on the number of examples, or constraint T < T_max on the training time.
• We speak of small-scale learning problem when (3) is constrained by the maximal number of examples n_max. Since the computing time is not limited, we can reduce the optimization error Eopt to insignificant levels by choosing ρ arbitrarily small. The excess error is then dominated by the approximation and estimation errors, Eapp and Eest. Taking n = n_max, we recover the approximation–estimation tradeoff that is the object of abundant literature.
• We speak of large-scale learning problem when (3) is constrained by the maximal computing time T_max. Approximate optimization, that is choosing ρ > 0, possibly can achieve better generalization because more training examples can be processed during the allowed time. The specifics depend on the computational properties of the chosen optimization algorithm through the expression of the computing time T(F, ρ, n).
3 The Asymptotics of Large-scale Learning
In the previous section, we have extended the classical approximation–estimation tradeoff by taking into account the optimization error. We have given an objective criterion to distinguish small-scale
and large-scale learning problems. In the small-scale case, we recover the classical tradeoff between
approximation and estimation. The large-scale case is substantially different because it involves
the computational complexity of the learning algorithm. In order to clarify the large-scale learning
tradeoff with sufficient generality, this section makes several simplifications:
• We are studying upper bounds of the approximation, estimation, and optimization errors (2). It is often accepted that these upper bounds give a realistic idea of the actual
convergence rates [9, 10, 11, 12]. Another way to find comfort in this approach is to say
that we study guaranteed convergence rates instead of the possibly pathological special
cases.
• We are studying the asymptotic properties of the tradeoff when the problem size increases.
Instead of carefully balancing the three terms, we write E = O(Eapp ) + O(Eest ) + O(Eopt )
and only need to ensure that the three terms decrease with the same asymptotic rate.
• We are considering a fixed family of functions F and therefore avoid taking into account
the approximation error Eapp . This part of the tradeoff covers a wide spectrum of practical
realities such as choosing models and choosing features. In the context of this work, we do
not believe we can meaningfully address this without discussing, for instance, the thorny
issue of feature selection. Instead we focus on the choice of optimization algorithm.
• Finally, in order to keep this paper short, we consider that the family of functions F is linearly parametrized by a vector w ∈ R^d. We also assume that x, y and w are bounded, ensuring that there is a constant B such that 0 ≤ ℓ(f_w(x), y) ≤ B and ℓ(·, y) is Lipschitz.
We first explain how the uniform convergence bounds provide convergence rates that take the optimization error into account. Then we discuss and compare the asymptotic learning properties of
several optimization algorithms.
3.1 Convergence of the Estimation and Optimization Errors
The optimization error Eopt depends directly on the optimization accuracy ρ. However, the accuracy ρ involves the empirical quantity En(f̃_n) − En(f_n), whereas the optimization error Eopt involves its expected counterpart E(f̃_n) − E(f_n). This section discusses the impact of the optimization error Eopt and of the optimization accuracy ρ on generalization bounds that leverage the uniform convergence concepts pioneered by Vapnik and Chervonenkis (e.g., [2]).
In this discussion, we use the letter c to refer to any positive constant. Multiple occurrences of the letter c do not necessarily imply that the constants have identical values.
3.1.1 Simple Uniform Convergence Bounds
Recall that we assume that F is linearly parametrized by w ∈ R^d. Elementary uniform convergence results then state that

  E[ sup_{f∈F} |E(f) − En(f)| ] ≤ c √(d/n),

where the expectation is taken with respect to the random choice of the training set.² This result immediately provides a bound on the estimation error:

  Eest = E[ (E(f_n) − En(f_n)) + (En(f_n) − En(f*_F)) + (En(f*_F) − E(f*_F)) ]
       ≤ 2 E[ sup_{f∈F} |E(f) − En(f)| ] ≤ c √(d/n).

This same result also provides a combined bound for the estimation and optimization errors:

  Eest + Eopt = E[E(f̃_n) − En(f̃_n)] + E[En(f̃_n) − En(f_n)] + E[En(f_n) − En(f*_F)] + E[En(f*_F) − E(f*_F)]
              ≤ c √(d/n) + ρ + 0 + c √(d/n) = c ( ρ + √(d/n) ).
Unfortunately, this convergence rate is known to be pessimistic in many important cases. More
sophisticated bounds are required.
3.1.2 Faster Rates in the Realizable Case
When the loss function ℓ(ŷ, y) is positive, with probability 1 − e^{−τ} for any τ > 0, relative uniform convergence bounds state that

  sup_{f∈F} ( E(f) − En(f) ) / √(E(f)) ≤ c √( (d/n) log(n/d) + τ/n ).

This result is very useful because it provides faster convergence rates O(log n / n) in the realizable case, that is when ℓ(f_n(x_i), y_i) = 0 for all training examples (x_i, y_i). We have then En(f_n) = 0, En(f̃_n) ≤ ρ, and we can write

  E(f̃_n) − ρ ≤ c √(E(f̃_n)) √( (d/n) log(n/d) + τ/n ).
² Although the original Vapnik–Chervonenkis bounds have the form c √( (d/n) log(n/d) ), the logarithmic term can be eliminated using the “chaining” technique (e.g., [10]).
Viewing this as a second degree polynomial inequality in the variable √(E(f̃_n)), we obtain

  E(f̃_n) ≤ c ( ρ + (d/n) log(n/d) + τ/n ).

Integrating this inequality using a standard technique (see, e.g., [13]), we obtain a better convergence rate of the combined estimation and optimization error:

  Eest + Eopt = E[ E(f̃_n) − E(f*_F) ] ≤ E[ E(f̃_n) ] = c ( ρ + (d/n) log(n/d) ).
3.1.3 Fast Rate Bounds
Many authors (e.g., [10, 4, 12]) obtain fast statistical estimation rates in more general conditions. These bounds have the general form

  Eapp + Eest ≤ c ( Eapp + ( (d/n) log(n/d) )^α )    for    1/2 ≤ α ≤ 1.    (4)

This result holds when one can establish the following variance condition:

  ∀f ∈ F,    E[ ( ℓ(f(X), Y) − ℓ(f*_F(X), Y) )² ] ≤ c ( E(f) − E(f*_F) )^{(2α−1)/α}.    (5)
The convergence rate of (4) is described by the exponent α which is determined by the quality of the variance bound (5). Works on fast statistical estimation identify two main ways to establish such a variance condition.
• Exploiting the strict convexity of certain loss functions [12, theorem 12]. For instance, Lee et al. [14] establish a O(log n/n) rate using the squared loss ℓ(ŷ, y) = (ŷ − y)².
• Making assumptions on the data distribution. In the case of pattern recognition problems, for instance, the “Tsybakov condition” indicates how cleanly the posterior distributions P(y|x) cross near the optimal decision boundary [11, 12]. The realizable case discussed in section 3.1.2 can be viewed as an extreme case of this.
Despite their much greater complexity, fast rate estimation results can accommodate the optimization accuracy ρ using essentially the methods illustrated in sections 3.1.1 and 3.1.2. We then obtain a bound of the form

  E = Eapp + Eest + Eopt = E[E(f̃_n) − E(f*)] ≤ c ( Eapp + ( (d/n) log(n/d) )^α + ρ ).    (6)

For instance, a general result with α = 1 is provided by Massart [13, theorem 4.2]. Combining this result with standard bounds on the complexity of classes of linear functions (e.g., [10]) yields the following result:

  E = Eapp + Eest + Eopt = E[E(f̃_n) − E(f*)] ≤ c ( Eapp + (d/n) log(n/d) + ρ ).    (7)

See also [15, 4] for more bounds taking into account the optimization accuracy.
3.2 Gradient Optimization Algorithms
We now discuss and compare the asymptotic learning properties of four gradient optimization algorithms. Recall that the family of functions F is linearly parametrized by w ∈ R^d. Let w*_F and w_n correspond to the functions f*_F and f_n defined in section 2.1. In this section, we assume that the functions w ↦ ℓ(f_w(x), y) are convex and twice differentiable with continuous second derivatives. Convexity ensures that the empirical cost function C(w) = En(f_w) has a single minimum.
Two matrices play an important role in the analysis: the Hessian matrix H and the gradient covariance matrix G, both measured at the empirical optimum w_n:

  H = En[ ∂²ℓ(f_{w_n}(x), y) / ∂w² ] = (∂²C/∂w²)(w_n),    (8)

  G = En[ ( ∂ℓ(f_{w_n}(x), y)/∂w ) ( ∂ℓ(f_{w_n}(x), y)/∂w )ᵀ ].    (9)
them, we assume that there are constants ?max ? ?min > 0 and ? > 0 such that, for any ? > 0,
we can choose the number of examples n large enough to ensure that the following assertion is true
with probability greater than 1 ? ? :
tr(G H ?1 ) ? ?
EigenSpectrum(H) ? [ ?min , ?max ]
and
(10)
The condition number ? = ?max /?min is a good indicator of the difficulty of the optimization [16].
The condition ?min > 0 avoids complications with stochastic gradient algorithms. Note that this
condition only implies strict convexity around the optimum. For instance, consider the loss function ? is obtained by smoothing the well known hinge loss ?(z, y) = max{0, 1 ? yz} in a small
neighborhood of its non-differentiable points. Function C(w) is then piecewise linear with smoothed
edges and vertices. It is not strictly convex. However its minimum is likely to be on a smoothed
vertex with a non singular Hessian. When we have strict convexity, the argument of [12, theorem 12]
yields fast estimation rates ? ? 1 in (4) and (6). This is not necessarily the case here.
The four algorithms considered in this paper use information about the gradient of the cost function to iteratively update their current estimate w(t) of the parameter vector; an illustrative sketch of the four update rules is given after the list below.
• Gradient Descent (GD) iterates

  w(t+1) = w(t) − η (∂C/∂w)(w(t)) = w(t) − (η/n) Σ_{i=1}^{n} (∂/∂w) ℓ( f_{w(t)}(x_i), y_i )

where η > 0 is a small enough gain. GD is an algorithm with linear convergence [16]. When η = 1/λ_max, this algorithm requires O(κ log(1/ρ)) iterations to reach accuracy ρ. The exact number of iterations depends on the choice of the initial parameter vector.
• Second Order Gradient Descent (2GD) iterates

  w(t+1) = w(t) − H⁻¹ (∂C/∂w)(w(t)) = w(t) − (1/n) H⁻¹ Σ_{i=1}^{n} (∂/∂w) ℓ( f_{w(t)}(x_i), y_i )

where matrix H⁻¹ is the inverse of the Hessian matrix (8). This is more favorable than Newton's algorithm because we do not evaluate the local Hessian at each iteration but simply assume that we know in advance the Hessian at the optimum. 2GD is a superlinear optimization algorithm with quadratic convergence [16]. When the cost is quadratic, a single iteration is sufficient. In the general case, O(log log(1/ρ)) iterations are required to reach accuracy ρ.
• Stochastic Gradient Descent (SGD) picks a random training example (x_t, y_t) at each iteration and updates the parameter w on the basis of this example only,

  w(t+1) = w(t) − (η/t) (∂/∂w) ℓ( f_{w(t)}(x_t), y_t ).

Murata [17, section 2.2] characterizes the mean E_S[w(t)] and variance Var_S[w(t)] with respect to the distribution implied by the random examples drawn from the training set S at each iteration. Applying this result to the discrete training set distribution for η = 1/λ_min, we have δw(t)² = O(1/t) where δw(t) is a shorthand notation for w(t) − w_n. We can then write

  E_S[ C(w(t)) − inf C ] = E_S[ tr( H δw(t) δw(t)ᵀ ) ] + o(1/t)
                         = tr( H E_S[δw(t)] E_S[δw(t)]ᵀ + H Var_S[w(t)] ) + o(1/t)
                         ≤ tr(G H)/t + o(1/t) ≤ νκ²/t + o(1/t).    (11)

Therefore the SGD algorithm reaches accuracy ρ after less than νκ²/ρ + o(1/ρ) iterations on average. The SGD convergence is essentially limited by the stochastic noise induced by the random choice of one example at each iteration. Neither the initial value of the parameter vector w nor the total number of examples n appear in the dominant term of this bound! When the training set is large, one could reach the desired accuracy ρ measured on the whole training set without even visiting all the training examples. This is in fact a kind of generalization bound.
Table 2: Asymptotic results for gradient algorithms (with probability 1). Compare the second last column (time to optimize) with the last column (time to reach the excess test error ε). Legend: n number of examples; d parameter dimension; κ, ν see equation (10).

Algorithm | Cost of one iteration | Iterations to reach ρ | Time to reach accuracy ρ    | Time to reach E ≤ c(Eapp + ε)
GD        | O(nd)                 | O( κ log(1/ρ) )       | O( ndκ log(1/ρ) )           | O( (d²κ/ε^{1/α}) log²(1/ε) )
2GD       | O( d² + nd )          | O( log log(1/ρ) )     | O( (d² + nd) log log(1/ρ) ) | O( (d²/ε^{1/α}) log(1/ε) log log(1/ε) )
SGD       | O(d)                  | νκ²/ρ + o(1/ρ)        | O( dνκ²/ρ )                 | O( dνκ²/ε )
2SGD      | O( d² )               | ν/ρ + o(1/ρ)          | O( d²ν/ρ )                  | O( d²ν/ε )
• Second Order Stochastic Gradient Descent (2SGD) replaces the gain η by the inverse of the Hessian matrix H:

  w(t+1) = w(t) − (1/t) H⁻¹ (∂/∂w) ℓ( f_{w(t)}(x_t), y_t ).

Unlike standard gradient algorithms, using the second order information does not change the influence of ρ on the convergence rate but improves the constants. Using again [17, theorem 4], accuracy ρ is reached after ν/ρ + o(1/ρ) iterations.
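The following sketch (ours, not the authors') makes the four update rules concrete. We assume `grad_i(w, i)` returns the per-example gradient ∂ℓ(f_w(x_i), y_i)/∂w, `H_inv` is the fixed inverse Hessian (8) evaluated at the empirical optimum, and `rng` is a NumPy random generator such as `np.random.default_rng()`.

```python
import numpy as np

def gd_step(w, grad_i, n, eta):
    """GD: full-batch step with constant gain eta (e.g. 1/lambda_max)."""
    g = sum(grad_i(w, i) for i in range(n)) / n        # cost O(nd)
    return w - eta * g

def gd2_step(w, grad_i, n, H_inv):
    """2GD: rescale the full gradient by the fixed inverse Hessian."""
    g = sum(grad_i(w, i) for i in range(n)) / n
    return w - H_inv @ g

def sgd_step(w, grad_i, n, eta, t, rng):
    """SGD: one random example, decreasing gain eta/t (eta = 1/lambda_min)."""
    return w - (eta / t) * grad_i(w, rng.integers(n))  # cost O(d)

def sgd2_step(w, grad_i, n, H_inv, t, rng):
    """2SGD: one random example, gain H^{-1}/t."""
    return w - (H_inv / t) @ grad_i(w, rng.integers(n))
```

As Table 2 indicates, one GD or 2GD step touches all n examples, while one SGD step costs O(d) regardless of n, which is the source of the large-scale advantage.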
For each of the four gradient algorithms, the first three columns of table 2 report the time for a single
iteration, the number of iterations needed to reach a predefined accuracy ρ, and their product, the time needed to reach accuracy ρ. These asymptotic results are valid with probability 1, since the probability of their complement is smaller than η for any η > 0.
The fourth column bounds the time necessary to reduce the excess error E below c(Eapp + ε), where c is the constant from (6). This is computed by observing that choosing ρ ∼ (d/n) log(n/d) in (6) achieves the fastest rate for ε, with minimal computation time. We can then use the asymptotic equivalences ρ ∼ ε and n ∼ (d/ε^{1/α}) log(1/ε). Setting the fourth column expressions to T_max and solving for ε yields the best excess error achieved by each algorithm within the limited time T_max. This provides the asymptotic solution of the Estimation–Optimization tradeoff (3) for large scale problems satisfying our assumptions.
These results clearly show that the generalization performance of large-scale learning systems depends on both the statistical properties of the estimation procedure and the computational properties
of the chosen optimization algorithm. Their combination leads to surprising consequences:
• The SGD and 2SGD results do not depend on the estimation rate α. When the estimation rate is poor, there is less need to optimize accurately. That leaves time to process more examples. A potentially more useful interpretation leverages the fact that (11) is already a kind of generalization bound: its fast rate trumps the slower rate assumed for the estimation error.
• Second order algorithms bring little asymptotical improvements in ε. Although the superlinear 2GD algorithm improves the logarithmic term, all four algorithms are dominated by the polynomial term in (1/ε). However, there are important variations in the influence of the constants d, κ and ν. These constants are very important in practice.
• Stochastic algorithms (SGD, 2SGD) yield the best generalization performance despite being the worst optimization algorithms. This had been described before [18] and observed in experiments.
insignificant levels, their generalization performance is solely determined by the statistical properties
of their estimation procedure.
4 Conclusion
Taking into account budget constraints on both the number of examples and the computation time,
we find qualitative differences between the generalization performance of small-scale learning systems and large-scale learning systems. The generalization properties of large-scale learning systems
depend on both the statistical properties of the estimation procedure and the computational properties of the optimization algorithm. We illustrate this fact with some asymptotic results on gradient
algorithms.
Considerable refinements of this framework can be expected. Extending the analysis to regularized risk formulations would make results on the complexity of primal and dual optimization algorithms [19, 20] directly exploitable. The choice of surrogate loss function [7, 12] could also have a
non-trivial impact in the large-scale case.
Acknowledgments Part of this work was funded by NSF grant CCR-0325463.
References
[1] Leslie G. Valiant. A theory of the learnable. Proc. of the 1984 STOC, pages 436–445, 1984.
[2] Vladimir N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer Series in Statistics. Springer-Verlag, Berlin, 1982.
[3] Stéphane Boucheron, Olivier Bousquet, and Gábor Lugosi. Theory of classification: a survey of recent advances. ESAIM: Probability and Statistics, 9:323–375, 2005.
[4] Peter L. Bartlett and Shahar Mendelson. Empirical minimization. Probability Theory and Related Fields, 135(3):311–334, 2006.
[5] J. Stephen Judd. On the complexity of loading shallow neural networks. Journal of Complexity, 4(3):177–192, 1988.
[6] Richard O. Duda and Peter E. Hart. Pattern Classification and Scene Analysis. Wiley and Son, 1973.
[7] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. The Annals of Statistics, 32:56–85, 2004.
[8] Clint Scovel and Ingo Steinwart. Fast rates for support vector machines. In Peter Auer and Ron Meir, editors, Proceedings of the 18th Conference on Learning Theory (COLT 2005), volume 3559 of Lecture Notes in Computer Science, pages 279–294, Bertinoro, Italy, June 2005. Springer-Verlag.
[9] Vladimir N. Vapnik, Esther Levin, and Yann LeCun. Measuring the VC-dimension of a learning machine. Neural Computation, 6(5):851–876, 1994.
[10] Olivier Bousquet. Concentration Inequalities and Empirical Processes Theory Applied to the Analysis of Learning Algorithms. PhD thesis, École Polytechnique, 2002.
[11] Alexandre B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Annals of Statistics, 32(1), 2004.
[12] Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification and risk bounds. Journal of the American Statistical Association, 101(473):138–156, March 2006.
[13] Pascal Massart. Some applications of concentration inequalities to statistics. Annales de la Faculté des Sciences de Toulouse, series 6, 9(2):245–303, 2000.
[14] Wee S. Lee, Peter L. Bartlett, and Robert C. Williamson. The importance of convexity in learning with squared loss. IEEE Transactions on Information Theory, 44(5):1974–1980, 1998.
[15] Shahar Mendelson. A few notes on statistical learning theory. In Shahar Mendelson and Alexander J. Smola, editors, Advanced Lectures in Machine Learning, volume 2600 of Lecture Notes in Computer Science, pages 1–40. Springer-Verlag, Berlin, 2003.
[16] John E. Dennis, Jr. and Robert B. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1983.
[17] Noboru Murata. A statistical study of on-line learning. In David Saad, editor, Online Learning and Neural Networks. Cambridge University Press, Cambridge, UK, 1998.
[18] Léon Bottou and Yann Le Cun. Large scale online learning. In Sebastian Thrun, Lawrence K. Saul, and Bernhard Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[19] Thorsten Joachims. Training linear SVMs in linear time. In Proceedings of KDD'06, Philadelphia, PA, USA, August 20–23 2006. ACM.
[20] Don Hush, Patrick Kelly, Clint Scovel, and Ingo Steinwart. QP algorithms with guaranteed accuracy and run time for support vector machines. Journal of Machine Learning Research, 7:733–769, 2006.
| 3323 |@word polynomial:3 loading:1 duda:1 nd:8 cleanly:1 d2:3 decomposition:4 covariance:1 pick:1 sgd:10 tr:4 carry:1 initial:2 series:2 chervonenkis:2 ecole:1 current:1 scovel:2 surprising:1 john:1 fn:21 realistic:1 numerical:1 kdd:1 update:2 intelligence:1 leaf:1 short:1 provides:4 iterates:2 complication:1 ron:1 org:2 zhang:1 qualitative:1 consists:1 shorthand:1 expected:5 behavior:1 roughly:2 nor:1 brain:1 little:2 actual:1 considering:1 becomes:1 provided:2 underlying:1 bounded:1 notation:1 mass:1 kind:2 minimizes:2 substantially:1 finding:2 nj:1 every:1 classifier:1 uk:1 control:1 grant:1 appear:1 mcauliffe:1 before:2 positive:2 local:1 consequence:1 despite:2 analyzing:1 cliff:1 solely:1 clint:2 lugosi:1 might:1 tmax:5 twice:1 equivalence:1 suggests:1 fastest:1 limited:4 unique:1 practical:1 acknowledgment:1 lecun:1 practice:2 procedure:3 asymptotics:1 empirical:12 integrating:1 nmax:4 superlinear:2 selection:1 prentice:1 risk:12 context:1 applying:1 influence:2 optimize:2 yt:3 urich:1 regardless:1 independently:1 convex:3 survey:1 simplicity:1 formalized:1 immediately:1 occurences:1 variation:2 limiting:1 annals:2 play:1 pioneered:1 speak:2 olivier:4 exact:1 pa:1 expensive:1 recognition:1 satisfying:1 observed:1 role:1 envisions:1 worst:1 ensures:1 decrease:1 convexity:6 complexity:12 sugar:1 depend:3 solving:1 algo:1 compromise:1 basis:1 emulate:1 america:1 various:2 jersey:1 distinct:1 fast:7 describe:1 artificial:1 choosing:7 m4x:1 neighborhood:1 quite:1 whose:1 say:1 toulouse:1 statistic:4 online:2 advantage:1 differentiable:2 maximal:4 product:1 combining:1 achieve:1 olkopf:1 exploiting:1 convergence:18 requirement:1 optimum:3 extending:1 object:1 illustrate:1 measured:3 progress:1 predicted:1 involves:5 come:1 implies:1 switzerland:1 closely:1 correct:1 stochastic:5 vc:1 human:1 viewing:1 trump:1 generalization:10 decompose:1 biological:1 elementary:1 pessimistic:1 strictly:1 clarify:1 hold:1 sufficiently:1 around:1 considered:1 hall:1 great:1 lawrence:1 achieves:1 purpose:1 estimation:31 favorable:1 proc:1 combinatorial:1 reflects:1 minimization:4 mit:1 clearly:2 avoid:1 focus:1 june:1 joachim:1 improvement:1 indicates:1 contrast:1 realizable:3 wf:1 esther:1 unlikely:1 abor:1 relation:1 arg:3 issue:1 dual:1 classification:4 colt:1 exponent:1 pascal:1 proposes:1 constrained:2 special:1 smoothing:1 field:1 eliminated:1 identical:1 represents:2 minf:2 jon:1 discrepancy:1 report:1 ephane:1 develops:1 piecewise:1 richard:1 few:1 pathological:1 bertinoro:1 wee:1 attempt:1 interest:1 englewood:1 mining:1 extreme:1 sens:1 primal:1 predefined:2 edge:1 necessary:2 abundant:1 desired:1 theoretical:1 minimal:1 instance:6 column:5 cover:1 assertion:1 measuring:1 leslie:1 cost:3 vertex:2 subset:1 uniform:5 levin:1 combined:2 gd:7 eopt:14 st:1 explores:1 told:1 lee:2 michael:1 squared:2 again:1 thesis:1 choose:2 possibly:2 cognitive:1 american:1 derivative:1 return:1 account:7 de:3 coefficient:1 inc:1 depends:5 stream:1 root:1 observing:1 sup:3 characterizes:1 competitive:1 recover:2 reached:1 capability:1 complicated:1 aggregation:1 contribution:2 square:1 accuracy:16 variance:4 efficiently:1 murata:2 yield:4 identify:1 correspond:1 accurately:1 critically:1 computerized:1 app:1 explain:1 reach:10 sebastian:1 associated:1 rithms:1 stop:1 gain:2 recall:2 improves:2 carefully:1 sophisticated:1 auer:1 thorny:1 appears:1 alexandre:1 higher:1 formulation:1 generality:1 smola:1 steinwart:2 dennis:1 nonlinear:1 google:1 noboru:1 quality:1 believe:1 usa:2 effect:2 concept:1 true:1 counterpart:1 
evolution:1 boucheron:1 laboratory:1 iteratively:1 illustrated:1 during:1 chaining:1 criterion:1 complete:1 polytechnique:1 gh:1 bring:1 qp:1 volume:5 belong:1 discussed:2 interpretation:1 association:1 refer:1 cambridge:3 rd:3 seldom:1 consistency:1 unconstrained:1 had:1 funded:1 patrick:1 dominant:1 posterior:1 recent:1 italy:1 inf:1 certain:2 verlag:3 inequality:4 shahar:3 arbitrarily:1 discussing:1 life:1 yi:6 minimum:2 additional:3 greater:2 stephen:1 multiple:1 reduces:1 faster:2 cross:1 long:1 hart:1 impact:4 prediction:1 involving:1 ensuring:1 essentially:2 expectation:2 iteration:14 achieved:2 proposal:1 whereas:2 singular:1 source:2 sch:1 w2:2 saad:1 unlike:1 probably:1 strict:3 subject:3 massart:2 virtually:1 induced:1 asymptotical:1 meaningfully:1 legend:1 jordan:1 call:1 near:1 leverage:2 enough:2 wn:3 reduce:2 idea:2 regarding:1 tradeoff:17 six:1 expression:2 bartlett:3 peter:5 hessian:6 useful:2 maybe:1 amount:2 tsybakov:2 extensively:1 processed:1 svms:1 reduced:1 meir:1 nsf:1 ccr:1 write:3 discrete:1 four:4 drawn:2 neither:1 annales:1 run:1 inverse:4 letter:2 fourth:2 family:9 yann:2 decision:1 summarizes:1 comparable:1 bound:21 guaranteed:2 simplification:1 quadratic:2 replaces:1 constraint:6 scene:1 bousquet:4 generates:1 aspect:1 dominated:2 argument:1 min:7 leon:1 combination:1 poor:1 march:1 jr:1 smaller:2 son:1 shallow:1 cun:1 making:1 happens:1 thorsten:1 taken:3 computationally:1 equation:2 zurich:1 discus:3 needed:2 mind:1 know:1 studying:2 available:3 operation:1 endowed:1 slower:1 original:1 ensure:2 log2:1 hinge:1 newton:1 const:1 eon:2 restrictive:1 yz:1 establish:3 society:1 classical:2 implied:1 objective:2 already:2 quantity:2 concentration:2 dependence:1 usual:1 surrogate:1 visiting:1 gradient:13 dp:1 berlin:2 capacity:1 parametrized:3 thrun:1 eigenspectrum:1 trivial:2 relationship:1 minimizing:3 vladimir:2 setup:1 unfortunately:1 robert:2 potentially:1 stoc:1 unknown:2 perform:1 upper:2 observation:1 benchmark:1 ingo:2 descent:4 extended:1 smoothed:2 august:1 david:1 complement:1 pair:1 required:2 hush:1 address:1 eest:13 below:1 pattern:2 comfort:1 summarize:1 program:1 max:5 power:2 suitable:1 difficulty:1 regularized:2 indicator:1 advanced:1 esaim:1 imply:1 philadelphia:1 literature:2 kelly:1 asymptotic:10 relative:1 loss:9 lecture:3 prototypical:1 proportional:1 var:2 degree:1 sufficient:3 principle:1 editor:4 balancing:1 surprisingly:1 last:2 side:1 wide:1 saul:1 taking:5 tolerance:1 boundary:1 dimension:2 judd:1 valid:1 avoids:1 author:1 qualitatively:1 made:1 refinement:1 transaction:1 excess:7 approximate:8 bernhard:1 keep:1 active:1 assumed:2 xi:6 spectrum:1 don:1 continuous:2 iterative:1 facult:1 table:4 reality:1 learn:1 alloted:1 bottou:3 complex:1 necessarily:3 williamson:1 main:1 linearly:5 motivation:1 noise:1 whole:1 allowed:1 eapp:15 exploitable:1 en:28 ff:16 wiley:1 tong:1 candidate:1 perceptual:1 theorem:4 specific:1 xt:3 learnable:2 insignificant:2 exists:2 mendelson:3 vapnik:4 valiant:2 importance:1 phd:1 magnitude:1 nec:1 accomodate:1 budget:2 logarithmic:2 fc:1 simply:1 likely:1 lagrange:1 approximatively:1 springer:4 nested:1 acm:1 ma:1 conditional:1 viewed:1 lipschitz:1 considerable:1 fw:7 change:1 determined:3 typical:2 total:3 accepted:1 e:5 la:1 support:3 schnabel:1 alexander:1 evaluate:1 princeton:1 d1:1 |
2,563 | 3,324 | Augmented Functional Time Series Representation
and Forecasting with Gaussian Processes
Nicolas Chapados and Yoshua Bengio
Department of Computer Science and Operations Research
University of Montréal
Montréal, Québec, Canada H3C 3J7
{chapados,bengioy}@iro.umontreal.ca
Abstract
We introduce a functional representation of time series which allows forecasts to
be performed over an unspecified horizon with progressively-revealed information sets. By virtue of using Gaussian processes, a complete covariance matrix
between forecasts at several time-steps is available. This information is put to use
in an application to actively trade price spreads between commodity futures contracts. The approach delivers impressive out-of-sample risk-adjusted returns after
transaction costs on a portfolio of 30 spreads.
1 Introduction
Classical time-series forecasting models, such as ARMA models [6], assume that forecasting is
performed at a fixed horizon, which is implicit in the model. An overlaying deterministic time trend
may be fit to the data, but is generally of fixed and relatively simple functional form (e.g. linear,
quadratic, or sinusoidal for periodic data). To forecast beyond the fixed horizon, it is necessary
to iterate forecasts in a multi-step fashion. These models are good at representing the short-term
dynamics of the time series, but degrade rapidly when longer-term forecasts must be made, usually
quickly converging to the unconditional expectation of the process after removal of the deterministic
time trend. This is a major issue in applications that require a forecast over a complete future
trajectory, and not a single (or restricted) horizon. These models are also constrained to deal with
regularly-sampled data, and make it difficult to condition the time trend on explanatory variables,
especially when iteration of short-term forecasts has to be performed. To a large extent, the same
problems are present with non-linear generalizations of such models, such as time-delay or recurrent
neural networks [1], which simply allow the short-term dynamics to become nonlinear but leave
open the question of forecasting complete future trajectories.
Functional Data Analysis (FDA) [10] has been proposed in the statistical literature as an answer
to some of these concerns. The central idea is to consider a whole curve as an example (specified
by a finite number of samples ⟨t, y_t⟩), which can be represented by coefficients in a non-parametric basis expansion such as splines. This implies learning about complete trajectories as a function of time, hence the “functional” designation. Since time is viewed as an independent variable, the approach can forecast at arbitrary horizons and handle irregularly-sampled data. Typically, FDA is used without explanatory time-dependent variables, which are important for the kind of applications we shall be considering. Furthermore, the question remains of how to integrate a progressively-revealed information set in order to make increasingly more precise forecasts of the same future
trajectory. To incorporate conditioning information, we consider here the output of a prediction to
be a whole forecasting curve (as a function of t).
The motivation for this work comes from forecasting and actively trading price spreads between
commodity futures contracts (see, e.g., [7], for an introduction). Since futures contracts expire and
have a finite duration, this problem is characterized by the presence of a large number of separate
1
historical time series, which all can be of relevance in forecasting a new time series. For example,
we expect seasonalities to affect similarly all the series. Furthermore, conditioning information, in
the form of macroeconomic variables, can be of importance, but exhibit the cumbersome property
of being released periodically, with explanatory power that varies across the forecasting horizon. In
other words, when making a very long-horizon forecast, the model should not incorporate conditioning information in the same way as when making a short- or medium-term forecast. A possible
solution to this problem is to have multiple models for forecasting each time series, one for each
time scale. However, this is hard to work with, requires a high degree of skill on the part of the
modeler, and is not amenable to robust automation when one wants to process hundreds of time
series. In addition, in order to measure risk associated with a particular trade (buying at time t and selling at time t′), we need to estimate the covariance of the price predictions associated with these
two points in the trajectory.
These considerations motivate the use of Gaussian processes, which naturally provide a covariance
matrix between forecasts made at several points. To tackle the challenging task of forecasting and
trading spreads between commodity futures, we introduce here a form of functional data analysis
in which the function to be forecast is indexed both by the date of availability of the information
set and by the forecast horizon. The predicted trajectory is thus represented as a functional object
associated with a distribution, a Gaussian process, from which the risk of different trading decisions
can readily be estimated. This approach allows incorporating input variables that cannot be assumed
to remain constant over the forecast horizon, like statistics of the short-term dynamics.
Previous Work
Gaussian processes for time-series forecasting have been considered before.
Multi-step forecasts are explicitly tackled by [4], wherein uncertainty about the intermediate values
is formally incorporated into the predictive distribution to obtain more realistic uncertainty bounds
at longer horizons. However, this approach, while well-suited to purely autoregressive processes,
does not appear amenable to the explicit handling of exogenous input variables. Furthermore, it
suffers from the restriction of only dealing with regularly-sampled data. Our approach is inspired
by the CO2 model of [11] as an example of application-specific covariance function engineering.
2 The Model
We consider a set of N real time series, each of length M_i: {y_t^i}, i = 1, …, N and t = 1, …, M_i. In our application each i represents a different year, and the series is the sequence of commodity spread prices during the period where it is traded. The lengths of all series are not necessarily identical, but we shall assume that the time periods spanned by the series are “comparable” (e.g. the same range of days within a year if the series follow an annual cycle) so that knowledge from past series can be transferred to a new one to be forecast. The forecasting problem is that given observations from the complete series i = 1, …, N − 1 and from a partial last series, {y_t^N}, t = 1, …, M_N, we want to extrapolate the last series until a predetermined endpoint, i.e. characterize the joint distribution of {y_τ^N}, τ = M_N + 1, …, M_N + H. We are also given a set of non-stochastic explanatory variables specific to each series, {x_t^i}, where x_t^i ∈ R^d. Our objective is to find an effective representation of P({y_τ^N}_{τ=M_N+1,…,M_N+H} | {x_t^i, y_t^i}_{i=1,…,N; t=1,…,M_i}), with τ, i and t ranging, respectively, over the forecasting horizon, the available series and the observations within a series.
Gaussian Processes
Assuming that we are willing to accept a normally-distributed posterior, Gaussian processes [8, 11, 14] have proved a general and flexible tool for nonlinear regression in a Bayesian framework. Given a training set of $M$ input-output pairs $\langle X \in \mathbb{R}^{M \times d}, y \in \mathbb{R}^M \rangle$, a set of $M'$ test point locations $X_* \in \mathbb{R}^{M' \times d}$ and a positive semi-definite covariance function $k : \mathbb{R}^d \times \mathbb{R}^d \mapsto \mathbb{R}$, the joint posterior distribution of the test outputs $y_*$ follows a normal with mean and covariance given by

$$E[y_* \mid X, X_*, y] = K(X_*, X)\, \Sigma^{-1} y, \qquad (1)$$
$$\mathrm{Cov}[y_* \mid X, X_*, y] = K(X_*, X_*) - K(X_*, X)\, \Sigma^{-1} K(X, X_*), \qquad (2)$$

where we have set $\Sigma = K(X, X) + \sigma_n^2 I_M$, with $K$ the matrix of covariance evaluations, $K(U, V)_{i,j} = k(U_i, V_j)$, and $\sigma_n^2$ the assumed process noise level. The specific form of the covariance function used in our application is described below, after introducing the representation used for forecasting.
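The posterior of eqs. (1)-(2) is straightforward to compute; the following is a minimal numpy sketch (the function and variable names are ours, and a Cholesky solve replaces the explicit matrix inverse for numerical stability):

    import numpy as np

    def gp_posterior(K, K_star, K_star_star, y, noise_var):
        """Posterior mean and covariance of a GP at test points, eqs. (1)-(2).

        K           : (M, M)   covariance among training inputs, K(X, X)
        K_star      : (M', M)  cross-covariance K(X_*, X)
        K_star_star : (M', M') covariance among test inputs, K(X_*, X_*)
        y           : (M,)     training targets
        noise_var   : float    sigma_n^2, the assumed process noise level
        """
        M = K.shape[0]
        Sigma = K + noise_var * np.eye(M)      # Sigma = K(X, X) + sigma_n^2 I_M
        L = np.linalg.cholesky(Sigma)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        mean = K_star @ alpha                  # eq. (1)
        V = np.linalg.solve(L, K_star.T)
        cov = K_star_star - V.T @ V            # eq. (2)
        return mean, cov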
Functional Representation for Forecasting In the spirit of functional data analysis, a first attempt at solving the forecasting problem is to set it forth in terms of regression from the input variables to the series values, adding to the inputs an explicit time index $t$ and series identity $i$,

$$E[y_t^i \mid I_{t_0}^i] = f(i, t, x_{t|t_0}^i), \qquad \mathrm{Cov}[y_t^i, y_{t'}^{i'} \mid I_{t_0}^i] = g(i, t, x_{t|t_0}^i, i', t', x_{t'|t_0}^{i'}), \qquad (3)$$

these expressions being conditioned on the information set $I_{t_0}^i$ containing information up to time $t_0$ of series $i$ (we assume that all prior series $i' < i$ are also included in their entirety in $I_{t_0}^i$). The notation $x_{t|t_0}^i$ denotes a forecast of $x_t^i$ given information available at $t_0$. Functions $f$ and $g$ result from Gaussian process training, eq. (1) and (2), using information in $I_{t_0}^i$. To extrapolate over the unknown horizon, one simply evaluates $f$ and $g$ with the series identity index $i$ set to $N$ and the time index $t$ within a series ranging over the elements of $\tau$ (the forecasting period). Owing to the smoothness properties of an adequate covariance function, one can expect the last time series (whose starting portion is present in the training data) to be smoothly extended, with the Gaussian process borrowing from prior series, $i < N$, to guide the extrapolation as the time index reaches far enough beyond the available data in the last series.
The principal difficulty with this method resides in handling the exogenous inputs $x_{t|t_0}^N$ over the forecasting period: the realizations of these variables, $x_t^N$, are not usually known at the time the
forecast is made and must be extrapolated with some reasonableness. For slow-moving variables
that represent a "level" (as opposed to a "difference" or a "return"), one can conceivably keep their
value constant to the last known realization across the forecasting period. However, this solution
is restrictive, problem-dependent, and precludes the incorporation of short-term dynamics variables
(e.g. the first differences over the last few time-steps) if desired.
Augmenting the Functional Representation We propose in this paper to augment the functional
representation with an additional input variable that expresses the time at which the forecast is being
made, in addition to the time for which the forecast is made. We shall denote the former the operation
time and the latter the target time. The distinction is as follows: operation time represents the time
at which the other input variables are observed and the time at which, conceptually, a forecast of
the entire future trajectory is performed. In contrast, target time represents time at a point of the
predicted target series (beyond operation time), given the information known at the operation time.
As previously, the time series index i remains part of the inputs. In this framework, forecasting is
performed by holding the time-series index constant at $N$, the operation time constant at the time $M_N$ of the last observation, the other input variables constant at their last-observed values $x_{M_N}^N$, and varying the target time over the forecasting period $\tau$. Since we are not attempting to extrapolate the
inputs beyond their intended range of validity, this approach admits general input variables, without
restriction as to their type, and whether they themselves can be forecast.
It can be convenient to represent the target time as a positive delta $\delta$ from the operation time $t_0$. In contrast to eq. (3), this yields the representation

$$E[y_{t_0+\delta}^i \mid I_{t_0}^i] = f(i, t_0, \delta, x_{t_0}^i), \qquad \mathrm{Cov}[y_{t_0+\delta}^i, y_{t_0'+\delta'}^{i'} \mid I_{t_0}^i] = g(i, t_0, \delta, x_{t_0}^i, i', t_0', \delta', x_{t_0'}^{i'}), \qquad (4)$$

where we have assumed the operation time to coincide with the end of the information set. Note that this augmentation allows us to dispense with the problematic extrapolation $x_{t|t_0}^i$ of the inputs, instead allowing a direct use of the last available values $x_{t_0}^i$. Moreover, from a given information set, nothing precludes forecasting the same trajectory from several operation times $t_0' < t_0$, which can be used as a means of evaluating the stability of the obtained forecast.
The obvious downside to augmentation lies in the greater computational cost it entails. In particular,
the training set must contain sufficient information to represent the output variable for many combinations of operation and target times that can be provided as input. In the worst case, this implies
that the number of training examples grows quadratically with the length of the training time series.
In practice, a downsampling scheme is used wherein only a fixed number of target-time points is
sampled for every operation-time point.1
¹ This number was 15 in our experiments, and these were not regularly spaced, with longer horizons spaced farther apart. Furthermore, the original daily frequency of the data was reduced to keep approximately one operation-time point per week.
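To make the augmented representation concrete, here is an illustrative sketch of how such a training set could be assembled; the function name, the geometric horizon grid and the stride are our assumptions, standing in for the downsampling scheme described above:

    import numpy as np

    def build_augmented_set(series, exog, n_deltas=15, stride=5):
        """Build (input, target) pairs under the augmented representation.

        series : list of 1-d arrays, series[i][t] = y_t^i
        exog   : list of 2-d arrays, exog[i][t]  = x_t^i (explanatory variables)
        For every operation time t0 (subsampled by `stride`, roughly weekly),
        sample up to `n_deltas` target horizons, spaced farther apart as they grow.
        """
        X, Y = [], []
        for i, y in enumerate(series):
            M = len(y)
            for t0 in range(0, M - 1, stride):              # operation times
                max_h = M - 1 - t0
                deltas = np.unique(np.geomspace(1, max_h, n_deltas).astype(int))
                for d in deltas:                            # target times t0 + d
                    X.append(np.concatenate(([i, t0, d], exog[i][t0])))
                    Y.append(y[t0 + d])
        return np.array(X), np.array(Y)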
Covariance Function
We used a modified form of the rational quadratic covariance function with hyperparameters for automatic relevance determination [11], which is expressed as

$$k_{\mathrm{AUG\text{-}RQ}}(u, v; \ell, \alpha, \sigma_f, \sigma_{TS}) = \sigma_f^2 \left( 1 + \frac{1}{2\alpha} \sum_{k=1}^{d} \frac{(u_k - v_k)^2}{\ell_k^2} \right)^{-\alpha} + \sigma_{TS}^2\, \delta_{i_u, i_v}, \qquad (5)$$
where $\delta_{j,k} = \mathbb{1}[j = k]$ is the Kronecker delta. The variables $u$ and $v$ are values in the augmented representation introduced previously, containing the three variables representing time (current time-series index or year, operation time, target time) as well as the additional explanatory variables. The notation $i_u$ denotes the time-series index component $i$ of input variable $u$. The last term of the covariance function, the Kronecker delta, is used to induce an increased similarity among points that belong to the same time series (e.g. the same spread trading year). By allowing a series-specific average level to be maintained into the extrapolated portion, the presence of this term was found to bring better forecasting performance. The hyperparameters $\ell_k$, $\alpha$, $\sigma_f$, $\sigma_{TS}$, $\sigma_n$ are found by maximizing the marginal likelihood on the training set by a standard conjugate gradient optimization [11]. For tractability, we rely on a two-stage training procedure, wherein hyperparameter optimization is performed on a fairly small training set ($M = 500$) and final training is done on a larger set ($M = 2250$), keeping hyperparameters fixed.
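As a concrete reading of eq. (5), the following sketch evaluates the covariance for a single pair of augmented inputs; we assume the series index is stored as the first input coordinate and enters only through the Kronecker-delta term, which is one plausible arrangement rather than the paper's exact implementation:

    import numpy as np

    def k_aug_rq(u, v, ell, alpha, sigma_f, sigma_ts):
        """Modified rational-quadratic covariance of eq. (5).

        u, v     : augmented inputs; u[0] is the time-series index i_u,
                   u[1:] the continuous coordinates (times and other inputs)
        ell      : ARD length-scales l_k, one per continuous coordinate
        alpha    : rational-quadratic shape parameter
        sigma_f, sigma_ts : signal and series-specific amplitudes
        """
        r2 = np.sum(((u[1:] - v[1:]) / ell) ** 2)     # sum_k (u_k - v_k)^2 / l_k^2
        rq = sigma_f ** 2 * (1.0 + r2 / (2.0 * alpha)) ** (-alpha)
        same_series = sigma_ts ** 2 * (u[0] == v[0])  # Kronecker delta on the index
        return rq + same_series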
3 Evaluating Forecasting Performance
To establish the benefits of the proposed functional representation for forecasting commodity spread
prices, we compared it against other likely models on three common grain and grain-related
spreads:² the January–July Soybeans, May–September Soybean Meal, and March–July Chicago
Hard Red Wheat. The forecasting task is to predict the complete future trajectory of each spread
(taken individually), from 200 days before maturity until maturity.
Methodology Realized prices in the previous trading years are provided from 250 days to maturity,
using data going back to 1989. The first test year is 1994. Within a given trading year, the time
variables represent the number of calendar days to maturity of the near leg; since no data is observed
on week-ends, training examples are sampled on an irregular time scale. Performance evaluation
proceeds through a sequential validation procedure [2]: within a trading year, we first train models
200 days before maturity and obtain a first forecast for the future price trajectory. We then retrain
models every 25 days, and obtain revised portions of the remainder of the trajectory. Proceeding
sequentially, this operation is repeated for succeeding trading years. All forecasts are compared
amongst models on squared-error and negative log-likelihood criteria (see "Assessing Significance",
below). Input variables are subject to minimal preprocessing: we standardize them to zero mean
and unit standard deviation. The price targets require additional treatment: since the price level of
a spread can vary significantly from year to year, we normalize the price trajectories to start at zero
at the start of every trading year, by subtracting the first price. Furthermore, in order to get slightly
better behaved optimization, we divide the price targets by their overall standard deviation.
Models Compared
The "complete" model to be compared against others is based on the
augmented-input representation Gaussian process with the modified rational quadratic covariance
function eq. (5). In addition to the three variables required for the representation of time, the following inputs were provided to the model: (i) the current spread price and the price of the three
nearest futures contracts on the underlying commodity term structure, (ii) economic variables (the
stock-to-use ratio and year-over-year difference in total ending stocks) provided on the underlying
commodity by the U.S. Department of Agriculture [13]. This model is denoted AugRQ/all-inp. An
example of the sequence of forecasts made by this model, repeated every 25 time steps, is shown
in the upper panel of Figure 1.
To determine the value added by each type of input variable, we include in the comparison two
models based on exactly the same architecture, but providing fewer inputs: AugRQ/less-inp does
² Our convention is to first give the short leg of the spread, followed by the long leg. Hence, Soybeans 1–7 should be interpreted as taking a short position (i.e. selling) in the January Soybeans contract and taking an offsetting long (i.e. buying) in the July contract. Traditionally, intra-commodity spread positions are taken so as to match the number of contracts on both legs (the number of short contracts equals the number of long ones), not the dollar value of the long and short sides.
Figure 1: Top Panel: Illustration of multiple forecasts, repeated every 25 days, of the 1996 March–July Wheat spread (dashed lines); realized price is in gray. Although the first forecast (smooth solid blue, with confidence bands) mistakes the overall price level, it approximately correctly identifies local price maxima and minima, which is sufficient for trading purposes. Bottom Panel: Position taken by the trading model (in red: short, then neutral, then long), and cumulative profit of that trade (gray).
not include the economic variables. AugRQ/no-inp further removes the price inputs, leaving only
the time-representation inputs. Moreover, to quantify the performance gain of the augmented representation of time, the model StdRQ/no-inp implements a "standard time representation" that would
likely be used in a functional data analysis model; as described in eq. (3), this uses a single time
variable instead of splitting the representation of time between the operation and target times.
Finally, we compare against simpler models: Linear/all-inp uses a dot-product covariance function
to implement Bayesian linear regression, using the full set of input variables described above. And
AR(1) is a simple linear autoregressive model. The predictive mean and covariance matrix for this last model are established as follows (see, e.g. [6]). We consider the scalar data generating process

$$y_t = \phi\, y_{t-1} + \epsilon_t, \qquad \epsilon_t \overset{\mathrm{iid}}{\sim} \mathcal{N}(0, \sigma^2), \qquad (6)$$

where the process $\{y_t\}$ has an unconditional mean of zero.³ Given information available at time $t$, $I_t$, the $h$-step ahead forecast from time $t$ under this model has conditional expectation and covariance (with the $h'$-step ahead forecast) expressed as

$$E[y_{t+h} \mid I_t] = \phi^h y_t, \qquad \mathrm{Cov}[y_{t+h|t},\, y_{t+h'|t} \mid I_t] = \sigma^2 \phi^{h+h'}\, \frac{1 - \phi^{-2\min(h,h')}}{\phi^2 - 1}.$$
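A small sketch of the AR(1) benchmark's predictive distribution follows; it evaluates the covariance through the equivalent finite sum $\sigma^2 \sum_{j=1}^{\min(h,h')} \phi^{h+h'-2j}$, which avoids the $\phi = 1$ singularity of the closed form (names are ours):

    import numpy as np

    def ar1_forecast(y_t, phi, sigma2, H):
        """h-step-ahead AR(1) forecast mean and covariance for h = 1..H (eq. 6).

        Mean: E[y_{t+h} | I_t] = phi^h y_t
        Cov : sigma^2 * sum_{j=1}^{min(h,h')} phi^(h+h'-2j)
        """
        h = np.arange(1, H + 1)
        mean = phi ** h * y_t
        cov = np.empty((H, H))
        for a in range(1, H + 1):
            for b in range(1, H + 1):
                m = min(a, b)
                cov[a - 1, b - 1] = sigma2 * sum(phi ** (a + b - 2 * j)
                                                 for j in range(1, m + 1))
        return mean, cov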
Assessing Significance of Forecasting Performance Differences For each trajectory forecast, we
measure the squared error (SE) made at each time-step along with the negative log-likelihood (NLL)
of the realized price under the predictive distribution. To account for differences in target variable
distribution throughout the years, we normalize the SE by dividing it by the standard deviation of
the test targets in a given year. Similarly, we normalize the NLL by subtracting the likelihood of a
univariate Gaussian distribution estimated on the test targets of the year.
Due to the serial correlation it exhibits, the time series of performance differences (either SE or
NLL) between two models cannot directly be subjected to a standard t-test of the null hypothesis of
no difference in forecasting performance. The well-known Diebold-Mariano test [3] corrects for this
correlation structure in the case where a single time series of performance differences is available.
This test is usually expressed as follows.
Let $\{d_t\}$ be the sequence of error differences between two models to be compared. Let $\bar{d} = \frac{1}{M} \sum_t d_t$ be the mean difference. The sample variance of $\bar{d}$ is readily shown [3] to be

$$\hat{v}_{\mathrm{DM}} \triangleq \widehat{\mathrm{Var}}[\bar{d}\,] = \frac{1}{M} \sum_{k=-K}^{K} \hat{\gamma}_k,$$
³ In our experiments, we estimate an independent empirical mean for each trading year, which is subtracted from the prices before proceeding with the analysis.
Table 1: Forecast performance difference between AugRQ/all-inp and all other models, for the three spreads
studied. For both the Squared Error and NLL criteria, the value of the cross-correlation-corrected statistic is listed (CCC) along with its p-value under the null hypothesis. A negative CCC statistic indicates that
AugRQ/all-inp beats the other model on average.
                  Soybeans 1-7              Soybean Meal 5-9          Wheat 3-7
                  Sq. Error    NLL          Sq. Error    NLL          Sq. Error    NLL
                  CCC    p     CCC    p     CCC    p     CCC    p     CCC    p     CCC    p
AugRQ/less-inp   -0.86  0.39  -0.89  0.37  -1.05  0.29  -0.95  0.34  -0.05  0.96   1.06  0.29
AugRQ/no-inp     -1.68  0.09  -1.73  0.08  -1.78  0.08  -2.42  0.02  -2.75  0.01  -2.42  0.02
Linear/all-inp   -1.53  0.13  -1.33  0.18  -1.61  0.11  -2.00  0.05  -4.20 10^-4  -3.45 10^-3
AR(1)            -4.24 10^-5  -0.44  0.66  -2.53  0.01   0.12  0.90  -6.50  0.00  -6.07 10^-9
StdRQ/no-inp     -2.44  0.01  -1.04  0.30  -2.69  0.01  -1.08  0.28  -2.67  0.01  -9.36  0.00
where $M$ is the sequence length and $\hat{\gamma}_k$ is an estimator of the lag-$k$ autocovariance of the $d_t$'s. The maximum lag order $K$ is a parameter of the test and must be determined empirically. Then the statistic $\mathrm{DM} = \bar{d}/\sqrt{\hat{v}_{\mathrm{DM}}}$ is asymptotically distributed as $\mathcal{N}(0, 1)$ and a classical test of the null hypothesis $\bar{d} = 0$ can be performed.
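For reference, a minimal implementation of the Diebold-Mariano statistic under these definitions might look as follows (the demeaned autocovariance estimator is a standard choice, not prescribed by the text):

    import numpy as np

    def diebold_mariano(d, K):
        """Diebold-Mariano statistic for a sequence d_t of error differences.

        Uses v_DM = (1/M) * sum_{k=-K}^{K} gamma_k, with gamma_k the lag-k
        sample autocovariance of d; DM = d_bar / sqrt(v_DM) ~ N(0, 1).
        """
        d = np.asarray(d, dtype=float)
        M = len(d)
        d_bar = d.mean()
        dc = d - d_bar
        gamma = [np.dot(dc[:M - k], dc[k:]) / M for k in range(K + 1)]
        v_dm = (gamma[0] + 2.0 * sum(gamma[1:])) / M   # symmetric sum over -K..K
        return d_bar / np.sqrt(v_dm)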
Unfortunately, even the Diebold-Mariano correction for autocorrelation is not sufficient to compare
models in the present case. Due to the repeated forecasts made for the same time-step across several
iterations of sequential validation, the error sequences are likely to be cross-correlated since they
result from models estimated on strongly overlapping training sets. This suggests that an additional
correction should be applied to account for this cross-correlation across test sets, expressed as
$$\hat{v}_{\mathrm{CCC\text{-}DM}} = \frac{1}{M^2} \left( \sum_i M_i \sum_{k=-K}^{K} \hat{\gamma}_k^i \;+\; \sum_i \sum_{j \neq i} M_{i \cap j} \sum_{k=-K'}^{K'} \hat{\gamma}_k^{i,j} \right), \qquad (7)$$

where $M_i$ is the number of examples in test set $i$, $M = \sum_i M_i$ is the total number of examples, $M_{i \cap j}$ is the number of time-steps where test sets $i$ and $j$ overlap, $\hat{\gamma}_k^i$ denote the estimated lag-$k$ autocovariances within test set $i$, and $\hat{\gamma}_k^{i,j}$ denote the estimated lag-$k$ cross-covariances between test sets $i$ and $j$. The maximum lag order for cross-covariances, $K'$, is possibly different from $K$ (our experiments used $K = K' = 15$). This revised variance estimator was used in place of the usual Diebold-Mariano statistic in the results presented below.
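A sketch of the corrected estimator of eq. (7) is given below; it assumes the per-test-set difference series share a common clock, so that the overlap $M_{i \cap j}$ reduces to $\min(M_i, M_j)$, which is a simplification of the general alignment implied by sequential validation:

    import numpy as np

    def _cov(a, b, k):
        """Lag-k sample cross-covariance of common-clock series a and b."""
        a = a - a.mean(); b = b - b.mean()
        n = min(len(a), len(b))
        if k >= 0:
            return np.dot(a[:n - k], b[k:n]) / n
        return np.dot(a[-k:n], b[:n + k]) / n

    def ccc_dm_variance(diffs, K, Kp):
        """Cross-correlation-corrected variance estimator of eq. (7).

        diffs : list of 1-d arrays, diffs[i][t] = error difference in test set i
        K, Kp : maximum lag orders for auto- and cross-covariances
        """
        M = sum(len(d) for d in diffs)
        total = 0.0
        for i, di in enumerate(diffs):
            Mi = len(di)
            total += Mi * sum(_cov(di, di, k) for k in range(-K, K + 1))
            for j, dj in enumerate(diffs):
                if j == i:
                    continue
                Mij = min(Mi, len(dj))      # overlap under the common clock
                total += Mij * sum(_cov(di, dj, k) for k in range(-Kp, Kp + 1))
        return total / M ** 2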
Results Results of the forecasting performance difference between AugRQ/all-inp and all other
models is shown in Table 1. We observe that AugRQ/all-inp generally beats the others on both the
SE and NLL criteria, often statistically significantly so. In particular, the augmented representation
of time is shown to be of value (i.e. comparing against StdRQ/no-inp). Moreover, the Gaussian
process is capable of making good use of the additional price and economic input variables, although
not always with the traditionally accepted levels of significance.
4 Application: Trading a Portfolio of Spreads
We applied this forecasting methodology based on an augmented representation of time to trading a
portfolio of spreads. Within a given trading year, we apply an information-ratio criterion to greedily
determine the best trade into which to enter, based on the entire price forecast (until the end of the
year) produced by the Gaussian process. More specifically, let {pt } be the future prices forecast by
the model at some operation time (presumably the time of last available element in the training set).
The expected forecast dollar profit of buying at $t_1$ and selling at $t_2$ is simply given by $p_{t_2} - p_{t_1}$. Of course, a prudent investor would take trade risk into consideration. A simple approximation of risk is given by the trade profit volatility. This yields the forecast information ratio⁴ of the trade

$$\widehat{\mathrm{IR}}(t_1, t_2) = \frac{E[p_{t_2} - p_{t_1} \mid I_{t_0}]}{\sqrt{\mathrm{Var}[p_{t_2} - p_{t_1} \mid I_{t_0}]}}, \qquad (8)$$
⁴ An information ratio is defined as the average return of a portfolio in excess of a benchmark, divided by the standard deviation of the excess return distribution; see [5] for more details.
Table 2: Financial performance statistics for the 30-spread portfolio on the 1994–2007 (until April 30) period, and two disjoint sub-periods. All returns are expressed in excess of the risk-free rate. The information ratio statistics are annualized. Skewness and excess kurtosis are on the monthly return distributions. Drawdown duration is expressed in calendar days. The model displays good performance for moderate risk.
Figure 2: After a price trajectory forecast (in the top and left portions of the figure), all possible pairs of buy-day/sell-day are evaluated on a trade information ratio criterion, whose results are shown by the level plot. The best trade is selected, here shorting 235 days before maturity with forecast price at a local maximum, and covering 100 days later at a local minimum.
                     Full Period   1994/01-2002/12   2003/01-2007/04
Avg Annual Return        7.3%            5.9%             10.1%
Avg Annual Stddev        4.1%            4.0%              4.1%
Information Ratio        1.77            1.45              2.44
Skewness                 0.68            0.65              0.76
Excess Kurtosis          3.40            4.60              1.26
Best Month               6.0%            6.0%              4.8%
Worst Month             -3.4%           -3.4%             -1.8%
Percent Months Up         71%             67%               77%
Max. Drawdown           -7.7%           -7.7%             -4.0%
Drawdown Duration         653             653                23
Drawdown From         1997/02         1997/02           2004/06
Drawdown Until        1998/11         1998/11           2004/07
where $\mathrm{Var}[p_{t_2} - p_{t_1} \mid I_{t_0}]$ can be computed as $\mathrm{Var}[p_{t_1} \mid I_{t_0}] + \mathrm{Var}[p_{t_2} \mid I_{t_0}] - 2\, \mathrm{Cov}[p_{t_1}, p_{t_2} \mid I_{t_0}]$, each quantity being separately obtainable from the Gaussian process forecast, cf. eq. (2). The trade decision is made in one of two ways, depending on whether a position has already been opened: (i) When making a decision at time $t_0$, if a position has not yet been entered for the spread in a given trading year, eq. (8) is maximized with respect to unconstrained $t_1, t_2 \geq t_0$. An illustration of this criterion is given in Figure 2, which corresponds to the first decision made when trading the spread shown in Figure 1. (ii) In contrast, if a position has already been opened, eq. (8) is only maximized with respect to $t_2$, keeping $t_1$ fixed at $t_0$. This corresponds to revising the exit point of an existing position. Simple additional filters are used to avoid entering marginal trades: we impose a trade duration of at least four days, a minimum forecast IR of 0.25 and a forecast standard deviation of the price sequence of at least 0.075. These thresholds have not been tuned extensively; they were used only to avoid trading on an approximately flat price forecast.
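The greedy maximization of eq. (8) over entry/exit pairs can be sketched as follows; the grid scan, the variance floor and the handling of short positions through the absolute value are our simplifications:

    import numpy as np

    def best_trade(mean, cov, min_ir=0.25, min_hold=4):
        """Greedy trade selection from a forecast trajectory, maximizing eq. (8).

        mean : (H,)   forecast prices p_t over the remaining horizon
        cov  : (H, H) forecast covariance from the Gaussian process
        Scans all entry/exit pairs (t1, t2) with t2 >= t1 + min_hold; a negative
        expected profit corresponds to a short trade (sell at t1, cover at t2).
        """
        H = len(mean)
        best = (0.0, None, None)
        for t1 in range(H):
            for t2 in range(t1 + min_hold, H):
                profit = mean[t2] - mean[t1]
                var = cov[t1, t1] + cov[t2, t2] - 2.0 * cov[t1, t2]
                ir = abs(profit) / np.sqrt(max(var, 1e-12))
                if ir > best[0] and ir >= min_ir:
                    best = (ir, t1, t2)
        return best   # (information ratio, entry day, exit day)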
We applied these ideas to trading a portfolio of 30 spreads, selected among the following commodities: Cotton (2 spreads), Feeder Cattle (2), Gasoline (1), Lean Hogs (7), Live Cattle (1), Natural
Gas (2), Soybean Meal (5), Soybeans (5), Wheat (5). The spreads were selected on the basis of
their good performance on the 1994–2002 period. Our simulations were carried out over the 1994–2007
period, using historical data (for Gaussian process training) dating back to 1989. Transaction costs
were assumed to be 5 basis points per spread leg traded. Spreads were never traded later than 25
calendar days before maturity of the near leg. Relative returns are computed using as a notional
amount half the total exposure incurred by both legs of the spread.5 Financial performance results
on the complete test period and two disjoint sub-periods (which correspond, until end-2002 to the
model selection period, and after 2003 to a true out-of-sample evaluation) are shown in Table 2. In
all sub-periods, but particularly since 2003, the portfolio exhibits a very favorable risk-return profile,
including positive skewness and acceptable excess kurtosis.6 A plot of cumulative returns, number
of open positions and monthly returns appears in Figure 3.
⁵ This is a conservative assumption, since most exchanges impose considerably reduced margin requirements on recognized spreads.
⁶ By way of comparison, over the period 1 Jan. 1994 – 30 Apr. 2007, the S&P 500 index has an information ratio of approximately 0.37 against the U.S. three-month treasury bills.
Figure 3: Top Panel: cumulative excess return after transaction costs of a portfolio of 30 spreads traded
according to the maximum information-ratio criterion; the bottom part plots the number of positions open at a
time (right axis). Bottom Panel: monthly portfolio relative excess returns; we observe the significant positive
skewness in the distribution.
5 Future Work and Conclusions
We introduced a flexible functional representation of time series, capable of making long-term forecasts from progressively-revealed information sets and of handling multiple irregularly-sampled series as training examples. We demonstrated the approach on a challenging commodity spread trading
application, making use of a Gaussian process's ability to compute a complete covariance matrix between several test outputs. Future work includes making more systematic use of approximation
methods for Gaussian processes (see [9] for a survey). The specific usage pattern of the Gaussian
process may guide the approximation: in particular, since we know in advance the test inputs, the
problem is intrinsically one of transduction, and the Bayesian Committee Machine [12] could prove
beneficial.
References
[1] C. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[2] N. Chapados and Y. Bengio. Cost functions and model combination for VaR-based asset allocation using
neural networks. IEEE Transactions on Neural Networks, 12(4):890–906, July 2001.
[3] F. X. Diebold and R. S. Mariano. Comparing predictive accuracy. Journal of Business & Economic
Statistics, 13(3):253–263, July 1995.
[4] A. Girard, C. E. Rasmussen, J. Q. Candela, and R. Murray-Smith. Gaussian process priors with uncertain
inputs ? application to multiple-step ahead time series forecasting. In S. T. S. Becker and K. Obermayer,
editors, Advances in Neural Information Processing Systems 15, pages 529–536. MIT Press, 2003.
[5] R. C. Grinold and R. N. Kahn. Active Portfolio Management. McGraw Hill, 1999.
[6] J. D. Hamilton. Time Series Analysis. Princeton University Press, 1994.
[7] J. C. Hull. Options, Futures and Other Derivatives. Prentice Hall, Englewood Cliffs, NJ, sixth edition,
2005.
[8] A. O?Hagan. Curve fitting and optimal design for prediction. Journal of the Royal Statistical Society B,
40:1–42, 1978. (With discussion).
[9] J. Quionero-Candela and C. E. Rasmussen. A unifying view of sparse approximate gaussian process
regression. Journal of Machine Learning Research, 6:1939–1959, 2005.
[10] J. O. Ramsay and B. W. Silverman. Functional Data Analysis. Springer, second edition, 2005.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[12] V. Tresp. A Bayesian committee machine. Neural Computation, 12:2719–2741, 2000.
[13] U.S. Department of Agriculture. Economic research service data sets. WWW publication. Available at
http://www.ers.usda.gov/Data/.
[14] C. K. I. Williams and C. E. Rasmussen. Gaussian processes for regression. In D. S. Touretzky, M. C.
Mozer, and M. E. Hasselmo, editors, Advances in Neural Information Processing Systems 8, pages 514–520. MIT Press, 1996.
2,564 | 3,325 | A general agnostic active learning algorithm
Sanjoy Dasgupta
UC San Diego
dasgupta@cs.ucsd.edu
Daniel Hsu
UC San Diego
djhsu@cs.ucsd.edu
Claire Monteleoni
UC San Diego
cmontel@cs.ucsd.edu
Abstract
We present an agnostic active learning algorithm for any hypothesis class
of bounded VC dimension under arbitrary data distributions. Most previous work on active learning either makes strong distributional assumptions,
or else is computationally prohibitive. Our algorithm extends the simple
scheme of Cohn, Atlas, and Ladner [1] to the agnostic setting, using reductions to supervised learning that harness generalization bounds in a
simple but subtle manner. We provide a fall-back guarantee that bounds
the algorithm's label complexity by the agnostic PAC sample complexity.
Our analysis yields asymptotic label complexity improvements for certain
hypothesis classes and distributions. We also demonstrate improvements
experimentally.
1 Introduction
Active learning addresses the issue that, in many applications, labeled data typically comes
at a higher cost (e.g. in time, effort) than unlabeled data. An active learner is given unlabeled
data and must pay to view any label. The hope is that significantly fewer labeled examples
are used than in the supervised (non-active) learning model. Active learning applies to a
range of data-rich problems such as genomic sequence annotation and speech recognition.
In this paper we formalize, extend, and provide label complexity guarantees for one of the
earliest and simplest approaches to active learning?one due to Cohn, Atlas, and Ladner [1].
The scheme of [1] examines data one by one in a stream and requests the label of any data
point about which it is currently unsure. For example, suppose the hypothesis class consists
of linear separators in the plane, and assume that the data is linearly separable. Let the
first six data be labeled as follows.
[Figure: six points in the plane labeled ⊕ and ⊖, with a seventh, unlabeled point indicated by an arrow.]
The learner does not need to request the label of the seventh point (indicated by the arrow) because it is not unsure about the label: any straight line with the ⊕s and ⊖s on opposite sides has the seventh point with the ⊕s. Put another way, the point is not in the region
of uncertainty [1], the portion of the data space for which there is disagreement among
hypotheses consistent with the present labeled data.
Although very elegant and intuitive, this approach to active learning faces two problems:
1. Explicitly maintaining the region of uncertainty can be computationally cumbersome.
2. Data is usually not perfectly separable.
1
Our main contribution is to address these problems. We provide a simple generalization
of the selective sampling scheme of [1] that tolerates adversarial noise and never requests
many more labels than a standard agnostic supervised learner would to learn a hypothesis
with the same error.
In the previous example, an agnostic active learner (one that does not assume a perfect
separator exists) is actually still uncertain about the label of the seventh point, because
all six of the previous labels could be inconsistent with the best separator. Therefore, it
should still request the label. On the other hand, after enough points have been labeled, if
an unlabeled point occurs at the position shown below, chances are its label is not needed.
[Figure: a larger sample of labeled ⊕ and ⊖ points, with a new unlabeled point positioned well inside one class's region.]
To extend the notion of uncertainty to the agnostic setting, we divide the sampled data into
two groups, S and T : S contains the data for which we have determined the label ourselves
(we explain below how to ensure that they are consistent with the best separator in the class)
and T contains the data for which we have explicitly requested a label. Now, somewhat
counter-intuitively, the labels in S are completely reliable, whereas the labels in T could
be inconsistent with the best separator. To decide if we are uncertain about the label of a
new point $x$, we reduce to a supervised learning task: for each possible label $\hat{y} \in \{\pm 1\}$, we learn a hypothesis $h_{\hat{y}}$ consistent with the labels in $S \cup \{(x, \hat{y})\}$ and with minimal empirical error on $T$. If, say, the error of the hypothesis $h_{+1}$ is much larger than that of $h_{-1}$, we can safely infer that the best separator must also label $x$ with $-1$ without requesting a label; if the error difference is only modest, we explicitly request a label. Standard generalization bounds for an i.i.d. sample let us perform this test by comparing empirical errors on $S \cup T$.
The last claim may sound awfully suspicious, because $S \cup T$ is not i.i.d.! Indeed, this is in a sense the core sampling problem that has always plagued active learning: the labeled sample $T$ might not be i.i.d. (due to the filtering of examples based on an adaptive criterion), while $S$ only contains unlabeled examples (with made-up labels). Nevertheless, we prove that in our case, it is in fact correct to effectively pretend $S \cup T$ is an i.i.d. sample. A direct
consequence is that the label complexity of our algorithm (the number of labels requested
before achieving a desired error) is never much more than the usual sample complexity of
supervised learning (and in some cases, is significantly less).
An important algorithmic detail is the specific choice of generalization bound we use in
deciding whether to request a label or not. The usual additive bounds with rate $n^{-1/2}$ are too loose, e.g. we know in the zero-error case the rate should be $n^{-1}$. Our algorithm
magnifies this small polynomial difference in the bound into an exponential difference in
label complexity, so it is crucial for us to use a good bound. We use a normalized bound
that takes into account the empirical error (computed on $S \cup T$) of the hypothesis in question.
In this paper, we present and analyze a simple agnostic active learning algorithm for general
hypothesis classes of bounded VC dimension. It extends the selective sampling scheme of
Cohn et al. [1] to the agnostic setting, using normalized generalization bounds, which we
apply in a simple but subtle manner. For certain hypothesis classes and distributions, our
analysis yields improved label complexity guarantees over the standard sample complexity
of supervised learning. We also demonstrate such improvements experimentally.
1.1 Related work
Our algorithm extends the selective sampling scheme of Cohn et al. [1] (described above)
to the agnostic setting. Most previous work on active learning either makes strong distributional assumptions (e.g. separability, uniform input distribution) [1?8], or is generally
computationally prohibitive [2, 4, 9]. See [10] for a discussion of these results.
A natural way to formulate active learning in the agnostic setting is to ask the learner to
return a hypothesis with error at most $\nu + \epsilon$ (where $\nu$ is the error of the best hypothesis in the specified class) using as few labels as possible. A basic constraint on the label complexity was pointed out by Kääriäinen [11], who showed that for any $\nu \in (0, 1/2)$, there are data distributions that force any active learner that achieves error at most $\nu + \epsilon$ to request $\Omega((\nu/\epsilon)^2)$ labels. The first rigorously-analyzed agnostic active learning algorithm, called
A², was developed recently by Balcan, Beygelzimer, and Langford [9]. Like Cohn-Atlas-Ladner [1], this algorithm uses a region of uncertainty, although the lack of separability complicates matters and A² ends up explicitly maintaining an $\epsilon$-net of the hypothesis space. Subsequently, Hanneke [12] characterized the label complexity of the A² algorithm in terms
of a parameter called the disagreement coefficient.
Our work was inspired by both [1] and [9], and we have built heavily upon their insights. Our
algorithm overcomes their complications by employing reductions to supervised learning.1
We bound the label complexity of our method in terms of the same parameter as used for
A² [12], and get a somewhat better dependence (linear rather than quadratic).
2 Preliminaries
2.1 Learning framework and uniform convergence
Let $\mathcal{X}$ be the input space, $D$ a distribution over $\mathcal{X} \times \{\pm 1\}$ and $H$ a class of hypotheses $h : \mathcal{X} \to \{\pm 1\}$ with VC dimension $\mathrm{vcdim}(H) = d < \infty$ (the finiteness ensures the $n$th shatter coefficient $S(H, n)$ is at most $O(n^d)$ by Sauer's lemma). We denote by $D_{\mathcal{X}}$ the marginal of $D$ over $\mathcal{X}$. In our active learning model, the learner receives unlabeled data sampled from $D_{\mathcal{X}}$; for any sampled point $x$, it can optionally request the label $y$ sampled from the conditional distribution at $x$. This process can be viewed as sampling $(x, y)$ from $D$ and revealing only $x$ to the learner, keeping the label $y$ hidden unless the learner explicitly requests it. The error of a hypothesis $h$ under $D$ is $\mathrm{err}_D(h) = \Pr_{(x,y) \sim D}[h(x) \neq y]$, and on a finite sample $Z \subset \mathcal{X} \times \{\pm 1\}$, the empirical error of $h$ is $\mathrm{err}(h, Z) = \frac{1}{|Z|} \sum_{(x,y) \in Z} \mathbb{1}[h(x) \neq y]$, where $\mathbb{1}[\cdot]$ is the 0-1 indicator function. We assume for simplicity that the minimal error $\nu = \inf\{\mathrm{err}_D(h) : h \in H\}$ is achieved by a hypothesis $h^* \in H$.
Our algorithm uses the following normalized uniform convergence bound [14, p. 200].
Lemma 1 (Vapnik and Chervonenkis [15]). Let $F$ be a family of measurable functions $f : \mathcal{Z} \to \{0, 1\}$ over a space $\mathcal{Z}$. Denote by $E_Z f$ the empirical average of $f$ over a subset $Z \subset \mathcal{Z}$. Let $\alpha_n = \sqrt{(4/n) \ln(8 S(F, 2n)/\delta)}$. If $Z$ is an i.i.d. sample of size $n$ from a fixed distribution over $\mathcal{Z}$, then, with probability at least $1 - \delta$, for all $f \in F$:

$$- \min\left( \alpha_n \sqrt{E_Z f},\; \alpha_n^2 + \alpha_n \sqrt{E f} \right) \;\leq\; E f - E_Z f \;\leq\; \min\left( \alpha_n^2 + \alpha_n \sqrt{E_Z f},\; \alpha_n \sqrt{E f} \right).$$
2.2 Disagreement coefficient
We will bound the label complexity of our algorithm in terms of (a slight variation of) the disagreement coefficient $\theta$ introduced in [12] for analyzing the label complexity of A².
Definition 1. The disagreement metric $\rho$ on $H$ is defined by $\rho(h, h') = \Pr_{x \sim D_{\mathcal{X}}}[h(x) \neq h'(x)]$. The disagreement coefficient $\theta = \theta(D, H, \epsilon) > 0$ is

$$\theta = \sup\left\{ \frac{\Pr_{x \sim D_{\mathcal{X}}}[\exists h \in B(h^*, r) \text{ s.t. } h(x) \neq h^*(x)]}{r} : r \geq \nu + \epsilon \right\}$$

where $B(h, r) = \{h' \in H : \rho(h, h') < r\}$, $h^* = \arg\inf_{h \in H} \mathrm{err}_D(h)$, and $\nu = \mathrm{err}_D(h^*)$.
The quantity $\theta$ bounds the rate at which the disagreement mass of the ball $B(h^*, r)$, i.e. the probability mass of points on which hypotheses in $B(h^*, r)$ disagree with $h^*$, grows as a function of the radius $r$. Clearly, $\theta \leq 1/(\nu + \epsilon)$; furthermore, it is a constant bounded
¹ It has been noted that the Cohn-Atlas-Ladner scheme can easily be made tractable using a reduction to supervised learning in the separable case [13, p. 68]. Although our algorithm is most naturally seen as an extension of Cohn-Atlas-Ladner, a similar reduction to supervised learning (in the agnostic setting) can be used for A² [10].
Algorithm 1
Input: stream $(x_1, x_2, \ldots, x_m)$ i.i.d. from $D_{\mathcal{X}}$
Initially, $S_0 := \emptyset$ and $T_0 := \emptyset$.
For $n = 1, 2, \ldots, m$:
  1. For each $\hat{y} \in \{\pm 1\}$, let $h_{\hat{y}} := \mathrm{LEARN}_H(S_{n-1} \cup \{(x_n, \hat{y})\}, T_{n-1})$.
  2. If $\mathrm{err}(h_{-\hat{y}}, S_{n-1} \cup T_{n-1}) - \mathrm{err}(h_{\hat{y}}, S_{n-1} \cup T_{n-1}) > \Delta_{n-1}$ for some $\hat{y} \in \{\pm 1\}$ (or if no such $h_{-\hat{y}}$ is found), then $S_n := S_{n-1} \cup \{(x_n, \hat{y})\}$ and $T_n := T_{n-1}$.
  3. Else request $y_n$; $S_n := S_{n-1}$ and $T_n := T_{n-1} \cup \{(x_n, y_n)\}$.
Return $h_f = \mathrm{LEARN}_H(S_m, T_m)$.
Figure 1: The agnostic selective sampling algorithm. See (1) for how to set $\Delta_n$.
independently of $1/(\nu + \epsilon)$ in several cases previously considered in the literature [12]. For example, if $H$ is homogeneous linear separators and $D_{\mathcal{X}}$ is the uniform distribution over the unit sphere in $\mathbb{R}^d$, then $\theta = \Theta(\sqrt{d})$.
3 Agnostic selective sampling
Here we state and analyze our general algorithm for agnostic active learning. The main
techniques employed by the algorithm are reductions to a supervised learning task and
generalization bounds applied to differences of empirical errors.
3.1 A general algorithm for agnostic active learning
Figure 1 states our algorithm in full generality. The input is a stream of $m$ unlabeled examples drawn i.i.d. from $D_{\mathcal{X}}$; for the time being, $m$ can be thought of as $\tilde{O}((d/\epsilon)(1 + \nu/\epsilon))$ where $\epsilon$ is the accuracy parameter.²
For $S, T \subseteq \mathcal{X} \times \{\pm 1\}$, let $\mathrm{LEARN}_H(S, T)$ denote a supervised learner that returns a hypothesis $h \in H$ consistent with $S$, and with minimum error on $T$. Algorithm 1 maintains two sets of labeled examples, $S$ and $T$, each of which is initially empty. Upon receiving $x_n$, it learns two hypotheses, $h_{\hat{y}} = \mathrm{LEARN}_H(S \cup \{(x_n, \hat{y})\}, T)$ for $\hat{y} \in \{\pm 1\}$, and then compares their empirical errors on $S \cup T$. If the difference is large enough³, it is possible to infer how $h^*$ labels $x_n$ (as we show in Lemma 3). In this case, the algorithm adds $x_n$, with this inferred label, to $S$. Otherwise, the algorithm requests the label $y_n$ and adds $(x_n, y_n)$ to $T$. Thus, $S$ contains examples with inferred labels consistent with $h^*$, and $T$ contains examples with their requested labels. Because $h^*$ might err on some examples in $T$, we just insist that $\mathrm{LEARN}_H$ find a hypothesis with minimal error on $T$. Meanwhile, by construction, $h^*$ is consistent with $S$, so we require $\mathrm{LEARN}_H$ to only consider hypotheses consistent with $S$.
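The following sketch restates Algorithm 1 in Python-like form; `learn`, `request_label` and `delta_n` are placeholders for $\mathrm{LEARN}_H$, the labeling oracle and the threshold of eq. (1) (a concrete threshold computation is sketched after eq. (1) below):

    def selective_sample(stream, learn, request_label, delta_n):
        """Sketch of Algorithm 1. learn(S, T) returns a hypothesis consistent
        with S with minimum error on T, or None if none exists."""
        S, T = [], []
        for n, x in enumerate(stream, start=1):
            hyps = {y: learn(S + [(x, y)], T) for y in (+1, -1)}
            if hyps[+1] is None or hyps[-1] is None:
                # only one label is consistent with S: infer it without asking
                y = +1 if hyps[-1] is None else -1
                S.append((x, y))
                continue
            err = {y: empirical_error(hyps[y], S + T) for y in (+1, -1)}
            thr = delta_n(n, hyps[+1], hyps[-1], S, T)
            if err[-1] - err[+1] > thr:       # confident the label is +1
                S.append((x, +1))
            elif err[+1] - err[-1] > thr:     # confident the label is -1
                S.append((x, -1))
            else:                             # unsure: pay for the true label
                T.append((x, request_label(x)))
        return learn(S, T)

    def empirical_error(h, Z):
        return sum(h(x) != y for x, y in Z) / max(len(Z), 1)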
3.2 Bounds for error differences
We still need to specify $\Delta_n$, the threshold value for error differences that determines whether the algorithm requests a label or not. Intuitively, $\Delta_n$ should reflect how closely empirical errors on a sample approximate true errors on the distribution $D$.
The setting of $\Delta_n$ can only depend on observable quantities, so we first clarify the distinction between empirical errors on $S_n \cup T_n$ and those with respect to the true (hidden) labels.
Definition 2. Let $S_n$ and $T_n$ be as defined in Algorithm 1. Let $\tilde{S}_n$ be the set of labeled examples identical to those in $S_n$, except with the true hidden labels swapped in. Thus, for example, $\tilde{S}_n \cup T_n$ is an i.i.d. sample from $D$ of size $n$. Finally, let

$$\widetilde{\mathrm{err}}_n(h) = \mathrm{err}(h, \tilde{S}_n \cup T_n) \qquad \text{and} \qquad \mathrm{err}_n(h) = \mathrm{err}(h, S_n \cup T_n).$$
² The $\tilde{O}$ notation suppresses $\log(1/\delta)$ and terms polylogarithmic in those that appear.
³ If $\mathrm{LEARN}_H$ cannot find a hypothesis consistent with $S \cup \{(x_n, y)\}$ for some $y$, then it is clear that $h^*(x) = -y$. In this case, we simply add $(x_n, -y)$ to $S$, regardless of $\Delta_{n-1}$.
It is straightforward to apply Lemma 1 to empirical errors on $\tilde{S}_n \cup T_n$, i.e. to $\widetilde{\mathrm{err}}_n(h)$, but we cannot use such bounds algorithmically: we do not request the true labels for points in $S_n$ and thus cannot reliably compute $\widetilde{\mathrm{err}}_n(h)$. What we can compute are error differences $\widetilde{\mathrm{err}}_n(h) - \widetilde{\mathrm{err}}_n(h')$ for pairs of hypotheses $(h, h')$ that agree on (and make the same mistakes on) $S_n$, since for such pairs, we have $\widetilde{\mathrm{err}}_n(h) - \widetilde{\mathrm{err}}_n(h') = \mathrm{err}_n(h) - \mathrm{err}_n(h')$.
Definition 3. For a pair $(h, h') \in H \times H$, define $g^+_{h,h'}(x, y) = \mathbb{1}[h(x) \neq y \wedge h'(x) = y]$ and $g^-_{h,h'}(x, y) = \mathbb{1}[h(x) = y \wedge h'(x) \neq y]$.
With this notation, we have $\mathrm{err}(h, Z) - \mathrm{err}(h', Z) = E_Z[g^+_{h,h'}] - E_Z[g^-_{h,h'}]$ for any $Z \subseteq \mathcal{X} \times \{\pm 1\}$. Now, applying Lemma 1 to $G = \{g^+_{h,h'} : (h, h') \in H \times H\} \cup \{g^-_{h,h'} : (h, h') \in H \times H\}$, and noting that $S(G, n) \leq S(H, n)^2$, gives the following lemma.
Lemma 2. Let $\beta_n = \sqrt{(4/n) \ln(8 S(H, 2n)^2/\delta)}$. With probability at least $1 - \delta$ over an i.i.d. sample $Z$ of size $n$ from $D$, we have for all $(h, h') \in H \times H$,

$$\mathrm{err}(h, Z) - \mathrm{err}(h', Z) \;\leq\; \mathrm{err}_D(h) - \mathrm{err}_D(h') + \beta_n^2 + \beta_n \left( \sqrt{E_Z[g^+_{h,h'}]} + \sqrt{E_Z[g^-_{h,h'}]} \right).$$
Corollary 1. Let $\beta_n = \sqrt{(4/n) \ln(8 (n^2 + n) S(H, 2n)^2/\delta)}$. Then, with probability at least $1 - \delta$, for all $n \geq 1$ and all $(h, h') \in H \times H$ consistent with $S_n$, we have

$$\mathrm{err}_n(h) - \mathrm{err}_n(h') \;\leq\; \mathrm{err}_D(h) - \mathrm{err}_D(h') + \beta_n^2 + \beta_n \left( \sqrt{\mathrm{err}_n(h)} + \sqrt{\mathrm{err}_n(h')} \right).$$
Proof. Applying Lemma 2 to each $\tilde{S}_n \cup T_n$ (replacing $\delta$ with $\delta/(n^2 + n)$) and a union bound implies, with probability at least $1 - \delta$, the bounds in Lemma 2 hold simultaneously for all $n \geq 1$ and all $(h, h') \in H^2$ with $\tilde{S}_n \cup T_n$ in place of $Z$. The corollary follows because $\widetilde{\mathrm{err}}_n(h) - \widetilde{\mathrm{err}}_n(h') = \mathrm{err}_n(h) - \mathrm{err}_n(h')$; and because $g^+_{h,h'}(x, y) \leq \mathbb{1}[h(x) \neq y]$ and $g^-_{h,h'}(x, y) \leq \mathbb{1}[h'(x) \neq y]$ for $(h, h')$ consistent with $S_n$, so $E_{\tilde{S}_n \cup T_n}[g^+_{h,h'}] \leq \mathrm{err}_n(h)$ and $E_{\tilde{S}_n \cup T_n}[g^-_{h,h'}] \leq \mathrm{err}_n(h')$.
Corollary 1 implies that we can effectively apply the normalized uniform convergence bounds from Lemma 1 to empirical error differences on $S_n \cup T_n$, even though $S_n \cup T_n$ is not an i.i.d. sample from $D$. In light of this, we use the following setting of $\Delta_n$:

$$\Delta_n := \beta_n^2 + \beta_n \left( \sqrt{\mathrm{err}_n(h_{+1})} + \sqrt{\mathrm{err}_n(h_{-1})} \right) \qquad (1)$$

where $\beta_n = \sqrt{(4/n) \ln(8 (n^2 + n) S(H, 2n)^2/\delta)} = \tilde{O}(\sqrt{d \log n / n})$ as per Corollary 1.
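A concrete computation of $\beta_n$ and $\Delta_n$ might look as follows; bounding $\log S(H, 2n)$ by $d \log(2n)$ via Sauer's lemma is our choice of an implementable surrogate:

    import numpy as np

    def beta_n(n, d, delta):
        # log S(H, 2n) <= d * log(2n) by Sauer's lemma (our surrogate; the
        # analysis only requires vcdim(H) = d and the shatter coefficient)
        log_term = np.log(8.0) + np.log(n ** 2 + n) + 2.0 * d * np.log(2.0 * n)
        return np.sqrt((4.0 / n) * (log_term - np.log(delta)))

    def delta_n(n, err_pos, err_neg, d, delta):
        # eq. (1): beta_n^2 + beta_n * (sqrt(err_n(h_+1)) + sqrt(err_n(h_-1)))
        b = beta_n(n, d, delta)
        return b ** 2 + b * (np.sqrt(err_pos) + np.sqrt(err_neg))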
3.3 Correctness and fall-back analysis
We now justify our setting of $\Delta_n$ with a correctness proof and fall-back guarantee.
Lemma 3. With probability at least $1 - \delta$, the hypothesis $h^* = \arg\inf_{h \in H} \mathrm{err}_D(h)$ is consistent with $S_n$ for all $n \geq 0$ in Algorithm 1.
Proof. Apply the bounds in Corollary 1 and proceed by induction on $n$. The base case is trivial since $S_0 = \emptyset$. Now assume $h^*$ is consistent with $S_n$. Suppose upon receiving $x_{n+1}$, we discover $\mathrm{err}_n(h_{+1}) - \mathrm{err}_n(h_{-1}) > \Delta_n$. We will show that $h^*(x_{n+1}) = -1$ (assume both $h_{+1}$ and $h_{-1}$ exist, since it is clear $h^*(x_{n+1}) = -1$ if $h_{+1}$ does not exist). Suppose for the sake of contradiction that $h^*(x_{n+1}) = +1$. We know that $\mathrm{err}_n(h_{+1}) \leq \mathrm{err}_n(h^*)$ (by the inductive hypothesis) and $\mathrm{err}_n(h_{+1}) - \mathrm{err}_n(h_{-1}) > \beta_n^2 + \beta_n (\sqrt{\mathrm{err}_n(h_{+1})} + \sqrt{\mathrm{err}_n(h_{-1})})$. In particular, $\mathrm{err}_n(h_{+1}) > \beta_n^2$. Therefore,

$$\mathrm{err}_n(h^*) - \mathrm{err}_n(h_{-1}) = (\mathrm{err}_n(h^*) - \mathrm{err}_n(h_{+1})) + (\mathrm{err}_n(h_{+1}) - \mathrm{err}_n(h_{-1}))$$
$$> \sqrt{\mathrm{err}_n(h_{+1})} \left( \sqrt{\mathrm{err}_n(h^*)} - \sqrt{\mathrm{err}_n(h_{+1})} \right) + \beta_n^2 + \beta_n \left( \sqrt{\mathrm{err}_n(h_{+1})} + \sqrt{\mathrm{err}_n(h_{-1})} \right)$$
$$> \beta_n \left( \sqrt{\mathrm{err}_n(h^*)} - \sqrt{\mathrm{err}_n(h_{+1})} \right) + \beta_n^2 + \beta_n \left( \sqrt{\mathrm{err}_n(h_{+1})} + \sqrt{\mathrm{err}_n(h_{-1})} \right)$$
$$= \beta_n^2 + \beta_n \left( \sqrt{\mathrm{err}_n(h^*)} + \sqrt{\mathrm{err}_n(h_{-1})} \right).$$

Now Corollary 1 implies that $\mathrm{err}_D(h^*) > \mathrm{err}_D(h_{-1})$, a contradiction.
Theorem 1. Let $\nu = \inf_{h \in H} \mathrm{err}_D(h)$ and $d = \mathrm{vcdim}(H)$. There exists a constant $c > 0$ such that the following holds. If Algorithm 1 is given a stream of $m$ unlabeled examples, then with probability at least $1 - \delta$, the algorithm returns a hypothesis with error at most

$$\nu + c \cdot \left( \frac{1}{m}\left(d \log m + \log\frac{1}{\delta}\right) + \sqrt{\frac{\nu}{m}\left(d \log m + \log\frac{1}{\delta}\right)} \right).$$

Proof. Lemma 3 implies that $h^*$ is consistent with $S_m$ with probability at least $1 - \delta$. Using the same bounds from Corollary 1 (already applied in Lemma 3) on $h^*$ and $h_f$ together with the fact $\mathrm{err}_m(h_f) \leq \mathrm{err}_m(h^*)$, we have $\mathrm{err}_D(h_f) \leq \nu + \beta_m^2 + \beta_m \sqrt{\nu} + \beta_m \sqrt{\mathrm{err}_D(h_f)}$, which in turn implies $\mathrm{err}_D(h_f) \leq \nu + 3\beta_m^2 + 2\beta_m \sqrt{\nu}$.
So, Algorithm 1 returns a hypothesis with error at most $\nu + \epsilon$ when $m = \tilde{O}((d/\epsilon)(1 + \nu/\epsilon))$; this is (asymptotically) the usual sample complexity of supervised learning. Since the algorithm requests at most $m$ labels, its label complexity is always at most $\tilde{O}((d/\epsilon)(1 + \nu/\epsilon))$.
3.4 Label complexity analysis
We can also bound the label complexity of our algorithm in terms of the disagreement coefficient $\theta$. This yields tighter bounds when $\theta$ is bounded independently of $1/(\nu + \epsilon)$. The key to deriving our label complexity bounds based on $\theta$ is noting that the probability of requesting the $(n+1)$th label is intimately related to $\theta$ and $\beta_n$ (see [10] for the complete proof).
Lemma 4. There exists a constant $c > 0$ such that, with probability at least $1 - 2\delta$, for all $n \geq 1$, the following holds. Let $h^*(x_{n+1}) = \hat{y}$ where $h^* = \arg\inf_{h \in H} \mathrm{err}_D(h)$. Then, the probability that Algorithm 1 requests the label $y_{n+1}$ is $\Pr_{x_{n+1} \sim D_{\mathcal{X}}}[\text{Request } y_{n+1}] \leq c \cdot \theta \cdot (\nu + \beta_n^2)$, where $\theta = \theta(D, H, 3\beta_m^2 + 2\beta_m\sqrt{\nu})$ is the disagreement coefficient, $\nu = \mathrm{err}_D(h^*)$, and $\beta_n = \tilde{O}(\sqrt{d \log n / n})$ is as defined in Corollary 1.
Now we give our main label complexity bound for agnostic active learning.
Theorem 2. Let $m$ be the number of unlabeled data given to Algorithm 1, $d = \mathrm{vcdim}(H)$, $\nu = \inf_{h \in H} \mathrm{err}_D(h)$, $\beta_m$ as defined in Corollary 1, and $\theta = \theta(D, H, 3\beta_m^2 + 2\beta_m\sqrt{\nu})$. There exists a constant $c_1 > 0$ such that for any $c_2 \geq 1$, with probability at least $1 - 2\delta$:
1. If $\nu \leq (c_2 - 1)\beta_m^2$, Algorithm 1 returns a hypothesis with error as bounded in Theorem 1 and the expected number of labels requested is at most

$$1 + c_1 c_2 \cdot \theta \left( d \log^2 m + \log\frac{1}{\delta} \log m \right).$$

2. Else, the same holds except the expected number of labels requested is at most

$$1 + c_1 \cdot \theta \left( \nu m + d \log^2 m + \log\frac{1}{\delta} \log m \right).$$

Furthermore, if $L$ is the expected number of labels requested as per above, then with probability at least $1 - \delta'$, the algorithm requests no more than $L + \sqrt{3 L \log(1/\delta')}$ labels.
Proof. Follows from Lemma 4 and a Chernoff bound for the Poisson trials $\mathbb{1}[\text{Request } y_n]$.
With the substitution $\epsilon = 3\beta_m^2 + 2\beta_m\sqrt{\nu}$ as per Theorem 1, Theorem 2 entails that for any hypothesis class and data distribution for which the disagreement coefficient $\theta = \theta(D, H, \epsilon)$ is bounded independently of $1/(\nu + \epsilon)$ (see [12] for some examples), Algorithm 1 only needs $\tilde{O}(\theta d \log^2(1/\epsilon))$ labels to achieve error $2\nu + \epsilon$ and $\tilde{O}(\theta d (\log^2(1/\epsilon) + (\nu/\epsilon)^2))$ labels to achieve error $\nu + \epsilon$. The latter matches the dependence on $\nu/\epsilon$ in the $\Omega((\nu/\epsilon)^2)$ lower bound [11]. The linear dependence on $\theta$ improves on the quadratic dependence required by A² [12]⁴. For an illustrative consequence of this, suppose $D_{\mathcal{X}}$ is the uniform distribution on the sphere
⁴ It may be possible to reduce A²'s quadratic dependence to a linear dependence by using normalized bounds, as we do here.
Figure 2: (a & b) Labeling rate plots. The plots show the number of labels requested (vertical axis) versus the total number of points seen (labeled + unlabeled, horizontal axis) using Algorithm 1. (a) H = thresholds: under random misclassification noise with $\eta$ = 0 (solid), 0.1 (dashed), 0.2 (dot-dashed); under the boundary noise model with $\eta$ = 0.1 (lower dotted), 0.2 (upper dotted). (b) H = intervals: under random misclassification with $(p_+, \eta)$ = (0.2, 0.0) (solid), (0.1, 0.0) (dashed), (0.2, 0.1) (dot-dashed), (0.1, 0.1) (dotted). (c & d) Locations of label requests. (c) H = intervals, $h^* = [0.4, 0.6]$. The top histogram shows the locations of the first 400 label requests (the x-axis is the unit interval); the bottom histogram is for all (2141) label requests. (d) H = boxes, $h^* = [0.15, 0.85]^2$. The first 200 requests, the next 200, and the final 109 are marked with different symbols.
in $\mathbb{R}^d$ and H is homogeneous linear separators; in this case, $\theta = \Theta(\sqrt{d})$. Then the label complexity of A² depends at least quadratically on the dimension, whereas the corresponding dependence for our algorithm is $d^{3/2}$.
4 Experiments
We implemented Algorithm 1 in a few simple cases to experimentally demonstrate the
label complexity improvements. In each case, the data distribution $D_{\mathcal{X}}$ was uniform over
[0, 1]; the stream length was m = 10000, and each experiment was repeated 20 times with
different random seeds. Our first experiment studied linear thresholds on the line. The target hypothesis was fixed to be $h^*(x) = \mathrm{sign}(x - 0.5)$. For this hypothesis class, we used two different noise models, each of which ensured $\inf_{h \in H} \mathrm{err}_D(h) = \mathrm{err}_D(h^*) = \eta$ for a pre-specified $\eta \in [0, 1]$. The first model was random misclassification: for each point $x \sim D_{\mathcal{X}}$, we independently labeled it $h^*(x)$ with probability $1 - \eta$ and $-h^*(x)$ with probability $\eta$. In the second model (also used in [7]), for each point $x \sim D_{\mathcal{X}}$, we independently labeled it $+1$ with probability $(x - 0.5)/(4\eta) + 0.5$ and $-1$ otherwise, thus concentrating the noise near the boundary. Our second experiment studied intervals on the line. Here, we only used random misclassification, but we varied the target interval length $p_+ = \Pr_{x \sim D_{\mathcal{X}}}[h^*(x) = +1]$.
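For concreteness, the two noise models for the thresholds experiment can be sampled as in the sketch below (the clipping of the boundary-noise probability to $[0, 1]$ outside the noisy band is our reading of the model):

    import numpy as np

    def sample_threshold_data(m, eta, boundary=False, rng=np.random.default_rng(0)):
        """Draw m points from U[0,1] and label them against h*(x) = sign(x - 0.5)
        under either of the two noise models (noise rate eta)."""
        x = rng.uniform(0, 1, m)
        if boundary:
            # noise near the boundary: P[y = +1] = (x - 0.5)/(4 eta) + 0.5
            p_pos = np.clip((x - 0.5) / (4 * eta) + 0.5, 0, 1)
            y = np.where(rng.uniform(size=m) < p_pos, 1, -1)
        else:
            # random misclassification: flip h*(x) with probability eta
            y = np.sign(x - 0.5).astype(int)
            flip = rng.uniform(size=m) < eta
            y[flip] = -y[flip]
        return x, y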
The results show that the number of labels requested by Algorithm 1 was exponentially
smaller than the total number of data seen (m) under the first noise model, and was polynomially smaller under the second noise model (see Figure 2 (a & b); we verified the polynomial vs. exponential distinction on separate log-log scale plots). In the case of intervals, we
observe an initial phase (of duration roughly ? 1/p+ ) in which every label is requested, followed by a more efficient phase, confirming the known active-learnability of this class [4,12].
These improvements show that our algorithm needed significantly fewer labels to achieve
the same error as a standard supervised algorithm that uses labels for all points seen.
As a sanity check, we examined the locations of data for which Algorithm 1 requested a
label. We looked at two particular runs of the algorithm: the first was with H = intervals,
$p_+ = 0.2$, $m = 10000$, and $\eta = 0.1$; the second was with H = boxes ($d = 2$), $p_+ = 0.49$, $m = 1000$, and $\eta = 0.01$. In each case, the data distribution was uniform over $[0, 1]^d$, and
the noise model was random misclassification. Figure 2 (c & d) shows that, early on, labels
were requested everywhere. But as the algorithm progressed, label requests concentrated
near the boundary of the target hypothesis.
5 Conclusion and future work
We have presented a simple and natural approach to agnostic active learning. Our extension
of the selective sampling scheme of Cohn et al. [1]
1. simplifies the maintenance of the region of uncertainty with a reduction to supervised learning, and
2. guards against noise with a subtle algorithmic application of generalization bounds.
Our algorithm relies on a threshold parameter Δ_n for comparing empirical errors. We
prescribe a very simple and natural choice for Δ_n (a normalized generalization bound from
supervised learning), but one could hope for a more clever or aggressive choice, akin to
those in [6] for linear separators.
Finding consistent hypotheses when data is separable is often a simple task. In such cases,
reduction-based active learning algorithms can be relatively efficient (answering some questions posed in [16]). On the other hand, agnostic learning suffers from severe computational
intractability for many hypothesis classes (e.g. [17]), and of course, agnostic active learning
is at least as hard in the worst case. Our reduction is relatively benign in that the learning
problems created are only over samples from the original distribution, so we do not create pathologically hard instances (like those arising from hardness reductions) unless they
are inherent in the data. Nevertheless, an important research direction is to develop algorithms that only require solving tractable (e.g. convex) optimization problems. A similar
reduction-based scheme may be possible.
References
[1] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine
Learning, 15(2):201?221, 1994.
[2] Y. Freund, H. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by
committee algorithm. Machine Learning, 28(2):133?168, 1997.
[3] S. Dasgupta, A. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. In
COLT, 2005.
[4] S. Dasgupta. Coarse sample complexity bounds for active learning. In NIPS, 2005.
[5] S. Hanneke. Teaching dimension and the complexity of active learning. In COLT, 2007.
[6] M.-F. Balcan, A. Broder, and T. Zhang. Margin based active learning. In COLT, 2007.
[7] R. Castro and R. Nowak. Upper and lower bounds for active learning. In Allerton Conference
on Communication, Control and Computing, 2006.
[8] R. Castro and R. Nowak. Minimax bounds for active learning. In COLT, 2007.
[9] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In ICML, 2006.
[10] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. UCSD
Technical Report CS2007-0898, http://www.cse.ucsd.edu/?djhsu/papers/cal.pdf, 2007.
[11] M. Kääriäinen. Active learning in the non-realizable case. In ALT, 2006.
[12] S. Hanneke. A bound on the label complexity of agnostic active learning. In ICML, 2007.
[13] C. Monteleoni. Learning with online constraints: shifting concepts and active learning. PhD
Thesis, MIT Computer Science and Artificial Intelligence Laboratory, 2006.
[14] O. Bousquet, S. Boucheron, and G. Lugosi. Introduction to statistical learning theory. Lecture
Notes in Artificial Intelligence, 3176:169?207, 2004.
[15] V. Vapnik and A. Chervonenkis. On the uniform convergence of relative frequencies of events
to their probabilities. Theory of Probability and its Applications, 16:264?280, 1971.
[16] C. Monteleoni. Efficient algorithms for general active learning. In COLT. Open problem, 2006.
[17] V. Guruswami and P. Raghavendra. Hardness of learning halfspaces with noise. In FOCS,
2006.
2,565 | 3,326 | Predicting Brain States from fMRI Data:
Incremental Functional Principal Component
Regression
S. Ghebreab
ISLA/HCS lab, Informatics Institute
University of Amsterdam, The Netherlands
ghebreab@science.uva.nl
A.W.M. Smeulders
ISLA lab, Informatics Institute
University of Amsterdam, The Netherlands
smeulders@science.uva.nl
P. Adriaans
HCS lab, Informatics Institute
University of Amsterdam, The Netherlands
pietera@science.uva.nl
Abstract
We propose a method for reconstruction of human brain states directly from functional neuroimaging data. The method extends the traditional multivariate regression analysis of discretized fMRI data to the domain of stochastic functional
measurements, facilitating evaluation of brain responses to complex stimuli and
boosting the power of functional imaging. The method searches for sets of voxel
time courses that optimize a multivariate functional linear model in terms of the R²-statistic. Population-based incremental learning is used to identify spatially distributed brain responses to complex stimuli without attempting to localize function first. Variation in hemodynamic lag across brain areas and among subjects
is taken into account by voxel-wise non-linear registration of stimulus pattern to
fMRI data. Application of the method on an international test benchmark for
prediction of naturalistic stimuli from new and unknown fMRI data shows that
the method successfully uncovers spatially distributed parts of the brain that are
highly predictive of a given stimulus.
1 Introduction
To arrive at a better understanding of human brain function, functional neuroimaging traditionally
studies the brain's responses to controlled stimuli. Controlled stimuli have the benefit of leading to
clear and often localized response signals in fMRI as they are specifically designed to affect only
certain brain functions. The drawback of controlled stimuli is that they are a reduction of reality: one
cannot be certain whether the response is due to the reduction or due to the stimulus. Naturalistic
stimuli open the possibility to avoid the question whether the response is due to the reduction or
the signal. Naturalistic stimuli, however, carry a high information content in their spatio-temporal
structure that is likely to instigate complex brain states. The immediate consequence hereof is that
one faces the task of isolating relevant responses amids complex patterns.
To reveal brain responses to naturalistic stimuli, advanced signal processing methods are required
that go beyond conventional mass univariate data analysis. Univariate techniques generally lack
sufficient power to capture the spatially distributed response of the brain to naturalistic stimuli. Multivariate pattern techniques, on the other hand, have the capacity to identify patterns of information
when they are present across the full spatial extent of the brain without attempting to localize func-
tion. Here, we propose a multivariate pattern analysis approach for predicting naturalistic stimuli
on the basis of fMRI data. Inverting the task from correlating stimuli with fMRI data to predicting
stimuli from fMRI data makes it easier to evaluate brain responses to naturalistic stimuli and may
extend the power of functional imaging substantially [1].
Various multivariate approaches for reconstruction of brain states directly from fMRI measurements
have recently been proposed. In most of these approaches, a classifier is trained directly on the fMRI
data to discriminate between known different brain states. This classifier is then used to predict brain
states on the basis of new and unknown fMRI data alone. Such approaches have been used to predict
what percept is dominant in a binocular rivalry protocol [2], what the orientation is of structures subjects are viewing [3] and what the semantic category is of objects [4] and words [5] subjects see on
a screen. In one competition [6], participants trained pattern analyzers on fMRI of subjects viewing
two short movies as well as on the subject?s movie feature ratings. Then participants employed the
analyzers to predict the experience of subjects watching a third movie based purely on fMRI data.
Very accurate predictions were reported for identifying the presence of specific time varying movie
features (e.g. faces, motion) and the observers who coded the movies [7].
We propose an incremental multivariate linear modeling approach for functional covariates, i.e.
where both the fMRI data and external stimuli are continuous. This approach differs fundamentally
from existing multivariate linear approaches (e.g. [8]) that instantly fit a given model to the data
within the linear framework under the assumption that both the data and the model are discrete.
Contemporary neuroimaging studies increasingly use high-resolution fMRI to accurately capture
continuous brain processes, frequently instigated by continuous stimulations. Hence, we propose
the use of functional data analysis [9], which treats data, or the processes giving rise to them, as
functions. This not only allows to overcome limitations in neuroimaing studies due to the large
number of data points compared to the number of samples, but also allows to exploit the fact that
functions defined on a specific domain form an inner product vector space, and in most circumstances can be treated algebraically like vectors [10].
We extend classical multivariate regression analysis of fMRI data [11] to stochastic functional measurements. We show that, cast into an incremental pattern searching framework, functional multivariate regression provides a powerful technique for fMRI-based prediction of naturalistic stimuli.
2 Method
In the remainder, we consider stimuli data and data produced by fMRI scanners as continuous functions of time, sampled at the scan interval and subject to observational noise. We treat the data
within a functional linear model where both the predictand and predictor are functional, but where
the design matrix that takes care of the linear mapping between the two is vectorial.
2.1 The Predictor
The predictor data are derived directly from the four-dimensional fMRI data I(x, t), where x ∈ R³
denotes the spatial position of a voxel and t denotes its temporal position. We represent each of
the S voxel time courses in functional form by f s (t), with t denoting the continuous path parameter
and s = 1, ..., S . Rather than directly using voxel time courses for prediction, we use their principal
components to eliminate collinearity in the predictor set. Following [10], we use functional principal
component analysis. Viviani et al. [10] showed that functional principal components analysis is
more effective than is its ordinary counterpart in recovering the signal of interest in fMRI data, even
if limited or no prior knowledge of the hemodynamic function or experimental design is specified.
In contrast to [10], however, our approach incrementally zooms in on stimuli-related voxel time
courses for dimension reduction (see section 2.5).
Given the set of S voxel time courses represented by the vector of functionals f(t) = [ f1 (t), ..., fS (t)]T ,
functional principal components analysis extracts main modes of variation in f(t). The number
of modes to retain is determined from the proportion of the variance that needs to be explained.
Assuming this is Q, the central concept is that of taking the linear combination
f_{sq} = ∫_t f_s(t) ξ_q(t) dt                                             (1)
where f_{sq} is the principal component score value of voxel time course f_s(t) in dimension q. Principal
components ξ_q(t), q = 1, ..., Q are sought one-by-one by optimizing
ξ_q(t) = arg max_{ξ_q} (1/S) Σ_{s=1}^{S} f_{sq}²                          (2)
where ξ_q(t) is subject to the following orthonormality constraints
∫_t ξ_q(t)² dt = 1,    ∫_t ξ_k(t) ξ_q(t) dt = 0,  k < q.                  (3)
The mapping of f_s(t) onto the subspace spanned by the first Q principal component curves results in
the vector of scalars f_s = [f_{s1}, ..., f_{sQ}]. We define the S × Q matrix F = [f_1, ..., f_S]^T of principal component scores as our predictor data in linear regression. That is, we perform principal component
regression with F as model, allowing us to naturally deal with temporal correlations, multicollinearity
and systematic signal variation.
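In discretized form, with every voxel time course sampled on a common grid, the scores of equations (1)-(3) can be approximated through an SVD of the centered S × T data matrix. Below is a minimal Python sketch under that assumption (the array shapes and names are ours):

import numpy as np

def functional_pca_scores(f, Q, dt=1.0):
    # f: (S, T) array of voxel time courses sampled on a regular grid.
    # Returns the (S, Q) score matrix F of equation (1) and the discretized
    # principal component curves xi (Q, T), normalized so the quadrature of
    # xi_q^2 dt equals 1, as in equation (3).
    fc = f - f.mean(axis=0, keepdims=True)        # center across voxels
    u, sing, vt = np.linalg.svd(fc, full_matrices=False)
    xi = vt[:Q] / np.sqrt(dt)
    F = fc @ xi.T * dt                            # quadrature of equation (1)
    return F, xi

S, T, Q = 200, 105, 4
f = np.random.default_rng(1).standard_normal((S, T))
F, xi = functional_pca_scores(f, Q)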
2.2 The Predictand
We represent the stimulus pattern by the functional 1(t), t being the continuous time parameter. We
register 1(t) to each voxel time course f s (t) in order to be able to compare equivalent time points
on stimulus and brain activity data. Alignment reduces to finding the warping function γ_s(t) that
produces the warped stimulus function
g_s(t) = 1(γ_s(t)).                                                       (4)
The time warping function γ_s(t) is strictly monotonic, differentiable up to a certain order, and takes
care of a small shift and a nonlinear transformation. A global alignment criterion and least-squares
estimation are used:
γ_s(t) = arg min_{γ̃_s} ∫_t (1(γ̃_s(t)) − f_s(t))² dt.                     (5)
Registration of 1(t) to all S voxel time courses results in the predictand data g(t) = [g_1(t), ..., g_S(t)]^T,
where g_s(t) is 1(t) registered onto voxel time course f_s(t). Our motivation for using voxel-wise
registration over standard convolution of the stimulus 1(t) with the hemodynamic response function is
the large variability in hemodynamic delays across brain regions and subjects. A non-linear warp
of 1(t) does not guarantee an outcome that is associated with brain physiology; however, it allows
to capture unknown subtle localized variations in hemodynamic delays across brain regions and
subjects.
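A minimal sketch of the voxel-wise registration of equations (4)-(5): the paper does not specify the warp family or the optimizer, so we assume a two-parameter monotone warp fitted by brute-force grid search (all names below are ours):

import numpy as np

def warp_stimulus(stim, shift, bend, t):
    # Evaluate 1(gamma(t)) with gamma(t) = t + shift + bend * t * (1 - t),
    # a strictly monotone warp on [0, 1] for |bend| < 1.
    gamma = np.clip(t + shift + bend * t * (1.0 - t), 0.0, 1.0)
    return np.interp(gamma, t, stim)

def register(stim, f_s, t, shifts, bends):
    # Least-squares search over a small warp grid, approximating equation (5).
    best_err, best_g = np.inf, stim
    for sh in shifts:
        for be in bends:
            g = warp_stimulus(stim, sh, be, t)
            err = np.sum((g - f_s) ** 2)
            if err < best_err:
                best_err, best_g = err, g
    return best_g

t = np.linspace(0.0, 1.0, 105)
stim = np.sin(6 * t)                 # stand-in for the stimulus function 1(t)
f_s = np.sin(6 * (t - 0.05))         # a voxel time course with a small lag
g_s = register(stim, f_s, t, np.linspace(-0.1, 0.1, 9), np.linspace(-0.2, 0.2, 9))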
2.3 The Model
We employ the predictor data to explain the predictand data within a linear modeling approach, i.e.
our multivariate linear model is defined as
g(t) = F β(t) + ε(t)                                                      (6)
with β(t) = [β_1(t), ..., β_Q(t)]^T being the Q × 1 vector of regression functions. The regression functions
are estimated by least-squares minimization such that
β̂(t) = arg min_{β̃(t)} ∫_t (g(t) − F β̃(t))² dt,                          (7)
under the assumption that the residual functions ε(t) = [ε_1(t), ..., ε_S(t)]^T are independent and normally distributed with zero mean. The estimated regression functions provide the best estimate of
g(t) in the least-squares sense:
ĝ(t) = F β̂(t).                                                           (8)
Given a new (sub)set of voxel time courses, prediction of a stimulus pattern now reduces to computing the matrix of principal component scores from this new set and weighting these scores by the
estimated regression functions β̂(t).
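On a discrete time grid the least-squares problem of equation (7) decouples pointwise in t, so the regression functions can be estimated with a single pseudo-inverse. A sketch under that discretization (names are ours):

import numpy as np

def fit_functional_regression(F, g):
    # F: (S, Q) score matrix, g: (S, T) registered stimuli on the grid.
    # The least squares of equation (7) decouples over grid points, so a
    # single pseudo-inverse yields beta_hat: (Q, T).
    return np.linalg.pinv(F) @ g

def predict(F_new, beta_hat):
    # Equations (8)/(14): g_hat(t) = F beta_hat(t), evaluated on the grid.
    return F_new @ beta_hat

rng = np.random.default_rng(2)
S, T, Q = 200, 105, 4
F = rng.standard_normal((S, Q))
g = F @ rng.standard_normal((Q, T)) + 0.1 * rng.standard_normal((S, T))
beta_hat = fit_functional_regression(F, g)
g_hat = predict(F, beta_hat)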
2.4 The Objective
The overall fit of the model to the data is expressed in terms of the adjusted R² statistic. The functional
counterpart of the traditional R² is computed on the basis of g(t), its mean ḡ(t) and its estimate
ĝ(t). For the voxel set S, the sum-of-squares functions
SST_S(t) = Σ_{s=1}^{S} (g_s(t) − ḡ(t))²                                   (9)
SSE_S(t) = Σ_{s=1}^{S} (g_s(t) − ĝ_s(t))²                                 (10)
are derived, where the first is the variation of the response about its mean and the second is the
error sum of squares function. The adjusted R-square function is then defined as
R_S(t) = 1 − (SSE_S(t)/(S − Q − 1)) / (SST_S(t)/(S − 1))                  (11)
where the degrees of freedom S − Q − 1 and S − 1 adjust the R-square. Our objective is to find the set
of voxel time courses S defined as
S = arg max_{S̃ ⊆ S} ∫_t R_{S̃}(t) dt                                      (12)
where S̃ denotes a subset of the entire collection of voxel time courses S extracted from a single
fMRI scan. That is, we aim at finding spatially distributed voxel responses S that best explain the
naturalistic stimuli, without making any prior assumptions about the location and size of voxel subsets.
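Discretized, the objective of equations (9)-(12) for a candidate voxel subset takes only a few lines; a sketch assuming the model has already produced ĝ on the grid (names are ours):

import numpy as np

def integrated_adjusted_r2(g, g_hat, Q, dt=1.0):
    # g, g_hat: (S, T) arrays of registered and predicted stimuli.
    S = g.shape[0]
    sst = np.sum((g - g.mean(axis=0, keepdims=True)) ** 2, axis=0)  # eq. (9)
    sse = np.sum((g - g_hat) ** 2, axis=0)                          # eq. (10)
    r2 = 1.0 - (sse / (S - Q - 1)) / (sst / (S - 1))                # eq. (11)
    return np.sum(r2) * dt                            # quadrature of eq. (12)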
2.5 The Search
In order to efficiently find the subset of voxels that maximizes Equation (12), we use Population-Based Incremental Learning (PBIL) [12], which combines Genetic Algorithms with Competitive
Learning. The PBIL algorithm uses a probability vector to explore the space of solutions. It incrementally generates solutions by sampling from that probability vector, evaluates these solutions,
and selects promising ones to update the probability vector. Here, at increment i, the probability
vector p_i = [p_{i1}, ..., p_{iS}] is used to generate a population of N solutions M_i = [m_{i1}, ..., m_{iN}], where
each member is an S-vector of binary values: m_{in} = [m_{in1}, ..., m_{inS}]. A value of 1 for m_{ins} means
that for solution n the corresponding voxel time course f_s(t) is included in the predictor set, while
a value of 0 indicates exclusion. Each member m_{in} is evaluated in terms of its adjusted R² value, and
the members with the highest values form the joint probability vector p̂. A new probability vector is
subsequently constructed for the next generation via competitive learning:
p_{i+1} = α p_i + (1 − α) p̂.                                              (13)
The learning parameter α controls the search: a low value focuses the search entirely on the most
recently selected voxel subsets, while a high value ensures that previously selected voxel subsets continue to be exploited.
In order to ensure spatial coherence and limit the computational load, we employ the PBIL algorithm not
on single voxel time courses, but on averages of spatial clusters of voxel time courses. That is, we first
spatially cluster voxel locations as shown in Figure 1, then compute average time course for each
cluster and then explore the averages via PBIL for model building.
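A minimal PBIL loop in the spirit of equation (13); the fitness callback stands in for fitting the functional linear model on the selected clusters and scoring it with equation (12), and the population size, iteration count, and initialization are our assumptions:

import numpy as np

def pbil(fitness, n_bits, n_iter=100, pop_size=50, alpha=0.6, n_best=5, seed=0):
    # Population-Based Incremental Learning over binary inclusion vectors.
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                       # initial probability vector
    for _ in range(n_iter):
        pop = rng.random((pop_size, n_bits)) < p   # sample solutions M_i
        scores = np.array([fitness(m) for m in pop])
        best = pop[np.argsort(scores)[-n_best:]]   # highest adjusted-R2 members
        p_hat = best.mean(axis=0)                  # joint probability vector
        p = alpha * p + (1.0 - alpha) * p_hat      # equation (13)
    return p

# Toy fitness favoring the first ten clusters, in place of the model fit:
p = pbil(lambda m: m[:10].sum() - 0.1 * m.sum(), n_bits=1024, n_iter=50)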
2.6 The Prediction
The subset of voxel time courses that results from population-based incremental learning defines
the most predictive voxel locations and associated regression functions. Given new and spatially
normalized fMRI data, represented by f̂(t) = [f̂_1(t), ..., f̂_S(t)]^T, prediction of a stimulus then reduces
to computing
ĝ(t) = F̂ β̂(t).                                                           (14)
Here, ĝ(t) is the vector of predicted stimuli, of which the mean is taken to be the sought stimulus. The matrix F̂ is the principal component score matrix obtained by performing functional
principal components analysis on the subset f̂_S(t), with S referring to the set of most predictive voxels
as determined by training.
Figure 1: Examples of K-means clustering of voxel locations using Euclidean distance. Left: 1024-means clustering output. Right: 512-means clustering output. Different gray values indicate different clusters in a spatially normalized brain atlas.
3 Experiments and Results
3.1 Experiment
Evaluation of our method is done on a data subset from the 2006 Pittsburgh brain activity interpretation competition (PBAIC) [6, 7], involving fMRI scans of three different subjects and two movie
sessions. In each session, a subject viewed a new Home Improvement sitcom movie for approximately 20 minutes. The 20-minute movie contained 5 interruptions where no video was present, only
a white fixation cross on a black background. All three subjects watched the same two movies. The
scans produced volumes with approximately 35,000 brain voxels, each approximately 3.28mm by
3.28mm by 3.5mm, with one volume produced every 1.75 seconds. These scans were preprocessed
(motion correction, slice time correction, linear trend removal) and spatially normalized (non-linear
registration to the Montreal Neurological Institute brain atlas).
After fMRI scanning, the three subjects watched the movie again to rate 30 movie features at time
intervals corresponding to the fMRI scan rate. In our experiments, we focus on the 13 core movie
features: amusement, attention, arousal, body parts, environmental sounds, faces, food, language,
laughter, motion, music, sadness and tools. The real-valued ratings were convolved with a hemodynamic response function (HRF) modeled by two gamma functions, then subjected to voxel-wise
non-linear registration as described in 2.2.
For training and testing our model, we removed parts corresponding with video presentations of a
white fixation cross on a black background. Taking into account the hemodynamic lag, we divided
each fMRI scan and each subject rating into 6 parts corresponding with the movie on parts. On
average each movie part contained 105 discrete measurements. We then functionalized these parts
by fitting a 30-coefficient B-spline to each voxel's discrete time course. This resulted in 18 data
sets for training (3 subjects ? 6 movie parts) and another 18 for testing. We used movie 1 data for
training and movie 2 data for prediction, and vice versa. We performed data analysis at two levels.
For each feature, first the individual brain scans were analyzed with our method, resulting in a first
sifting of voxels. First-level analysis results for a given feature were then subjected to second level
analysis to identify across subject predictive voxels. Pearson product-moment correlation coefficient
between manual feature rating functions and the automatically predicted feature functions was used
as an evaluation measure.
3.2 Results
All results were obtained with Q = 4 principal component dimensions, learning parameter value
α = 0.6 and K-means clustering with 1024 clusters for all movie features. These values for Q
and α produced the overall highest average cross-correlation value in a small parameter optimization
experiment (data not shown here). Little performance differences were seen for various numbers of
dimensions, indicating that the essential information can be captured with as little as 4 dimension.
Significant performance differences across features, however, were observed for different learning
parameter values, indicating considerable variation in brain response to distinct stimuli.
[Figure 2, right panel: "Manual versus Predicted Feature Ratings"; legend: Manual, Prediction; vertical axis "functions" from −0.1 to 0.6; horizontal axis "arguments" from 0 to 1.]
Figure 2: Left: normalized cross correlation values from cross-validation for 13 core movie features.
Right: functionalized subject 3 (solid red) and predicted (dotted blue) rating for the language feature
of part 5 of movie 1.
Figure 2 (left) shows the average of 2 × 18 cross-correlation coefficients from cross-validation for all
13 movie features. For features faces, language and motion cross correlation values above 0.5 were
obtained, meaning that there is a significant degree of match between the subject ratings and the
predicted ratings. Reasonable predictions were also obtained for features arousal and body parts.
Our results are consistent with top 3 rank entries of 2006 PBAIC in that features faces and language
are reliably predicted. These entries used recurrent neural networks, ridge regression and a dynamic
Gaussian Markov Random Field modeling on the entire test data benchmark, yielding across feature
average cross correlations of: 0.49, 0.49 and 0.47 respectively. Here, the feature average cross
correlation value based on the reduced training data set is 0.36. Note, that in the 2006 competition
our method ranked first in the actor category [6]. We were able to accurately predict which actor the
subjects were seeing purely based on fMRI scans [7].
The best single result, with highest cross correlation value of 0.76, was obtained for feature language
of subject 3 watching part 5 of movie 1. For this feature, first level analysis of each of the 18 training
data sets associated with movie 2 produced a total number of 1738 predictive voxels. In the second
level analysis, these voxels were analyzed again to arrive at a reduced data set of 680 voxels for
building the multivariate functional linear model and determining regression functions ?(t). For
prediction of feature language, corresponding voxel time courses were extracted from the fMRI data
of subject 3 watching movie 1 part 5, and weighted by ?(t). The manual rating of feature language
of movie 1 part 5 by subject 3 and the average of the automatically predicted feature functions are
shown in Figure 2 (right).
Figure 3: Glass view, gray level image with color overlay and surface rendering of 1738 voxels from
first level analysis. Color denotes predictive power and cross hair shows most predictive location.
Figure 3 shows glass view, gray level image with color overlay and surface rendering of the 1738
voxels (approximately 40 clusters) from first level analysis. The cross hair shows the voxel location
in Brodman area 47 that was found to be predictive across most subjects and movie parts: it was
selected in 6 out of 18 training items (see color bar). The predictive locations correspond with
the left and right inferior frontal gyrus, which are known to be involved in language processing.
The distributed nature of these clusters is consistent with earlier findings that processing involved
in language occurs in diffuse brain regions, including primary auditory and visual cortex, frontal
regions in the left and right hemisphere, in homologues regions [13].
As we are dealing with curves, the possibility exists to explore additional data characteristics such as
curvature. We performed an experiment with 1st order derivative functions, rather than the original
functions to exploit potentially available higher order structure. Figure 4 (left) shows the cross
correlation for 1st order derivative functions. The cross correlation values are similar to the ones
shown in Figure 2. The average cross correlation value is slightly better than for the original data:
0.38. This may indicate that higher order structures may contain more predictive power.
In order to get insight into the effect of non-linear warping on prediction performance, we conducted
an experiment in which we used convolutions of the stimulus 1(t) with different forms of an HRF
function modeled by two gamma functions. Various HRF functions were obtained by varying the
delay of response (relative to onset), delay of undershoot (relative to onset), dispersion of response,
dispersion of undershoot, ratio of response to undershoot. To determine g s (t), we convolved 1(t)
with 16 different HRF functions, and selected the convolved one with highest cross correlation with
f s (t) to be g s (t). Hence, we parametrically modeled the HRF and learned its parameters from the
data.
Figure 4 (right) shows the results of the experiments with convolution of stimuli data with HRF
models learned from the data. As can be seen, the cross correlation values are much lower compared
to the values in Figure 2 (left). The average cross correlation value is 0.31. Hence, non-linear
warping of stimulus onto voxel time course significantly enhances the predictive power of our model.
This suggests that non-linear warping is a potential alternative for determining the best possible HRF
estimate to overcome potential negative consequences of assuming HRF consistency across subjects
or brain regions [14].
Figure 4: Left: normalized cross correlation values from cross-validation for 13 core movie features,
using 1st order derivative data. Right: cross correlation values from cross-validation for 13 core
movie features, using HRF convoluted rather than warped stimuli data.
4 Conclusion
Functional data analysis provides the possibility to fully exploit structure in inherently continuous
data such as fMRI. The advantage of functional data analysis for principal component analysis of
fMRI data was recently demonstrated in [10]. Here, we proposed a functional linear model that
treats fMRI and stimuli as stochastic functional measurements. Cast into an incremental pattern
searching framework, the method provides the ability to identify important covariance structure
of spatially distributed brain responses and stimuli, i.e. it directly couples activation across brain
regions rather than first localizing and then integrating function. The method is suited for unbiased
probing of functional characteristics of brain areas as well as for exposing meaningful relations
between complex stimuli and distributed brain responses. This finding is supported by the good
prediction performance of our method in the 2006 PBAIC international competition for brain activity
interpretation. We are currently extending the method with new objective functions, dimension
reduction techniques and multi-target search techniques to cope with multiple (interacting) stimuli.
Also, in this work we made use of spatial clusters at a single hierarchical level. Preliminary results
with hierarchical clustering to arrive at 'supervoxels' at different spatial resolutions seem to further
improve prediction power.
References
[1] J. Haynes and G. Rees. Decoding mental states from brain activity in humans. Nature Neuroscience, 7(8):523-534, 2006.
[2] J. Haynes and G. Rees. Predicting the orientation of invisible stimuli from activity in human primary visual cortex. Nature Neuroscience, 7(5):686-691, 2005.
[3] Y. Kamitani and F. Tong. Decoding the visual and subjective contents of the human brain. Nature Neuroscience, 8(5):679-685, 2005.
[4] S.M. Polyn, V.S. Natu, J.D. Cohen, and K.A. Norman. Category-specific cortical activity precedes retrieval during memory search. Science, 310(5756):1963-1966, 2005.
[5] T.M. Mitchell, R. Hutchinson, R.S. Niculescu, F. Pereira, X. Wang, M. Just, and S. Newman. Learning to decode cognitive states from brain images. Machine Learning, 57(1-2), 2004.
[6] W. Schneider, A. Bartels, E. Formisano, J. Haxby, R. Goebel, T. Mitchell, T. Nichols, and G. Siegle. Competition: Inferring experience based cognition from fMRI. In Proceedings, Organization of Human Brain Mapping, Florence, Italy, June 15, 2006.
[7] Editorial. What's on your mind. Nature Neuroscience, 6(8):981, 2006.
[8] K.J. Worsley, J.B. Poline, K.J. Friston, and A.C. Evans. Characterizing the response of PET and fMRI data using multivariate linear models. NeuroImage, 6, 1997.
[9] J. Ramsay and B. Silverman. Functional Data Analysis. Springer-Verlag, 1997.
[10] R. Viviani, G. Grohn, and M. Spitzer. Functional principal component analysis of fMRI data. Human Brain Mapping, 24:109-129, 2005.
[11] D.B. Rowe and R.G. Hoffmann. Multivariate statistical analysis in fMRI. IEEE Engineering in Medicine and Biology, 25:60-64, 2006.
[12] Shumeet Baluja. Population-based incremental learning: A method for integrating genetic search based function optimization and competitive learning. Technical Report CMU-CS-94-163, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, 1994.
[13] M.A. Gernsbacher and M.P. Kaschak. Neuroimaging studies of language production and comprehension. Annual Review of Psychology, 54:91-114, 2003.
[14] D.A. Handwerker, J.M. Ollinger, and M. D'Esposito. Variation of BOLD hemodynamic response function across subjects and brain regions and their effects on statistical analysis. NeuroImage, 8(21):1639-1651, 2004.
2,566 | 3,327 | Rapid Inference on a Novel AND/OR graph for
Object Detection, Segmentation and Parsing
Yuanhao Chen
Department of Automation
University of Science and Technology of China
yhchen4@ustc.edu.cn
Chenxi Lin
Microsoft Research Asia
chenxil@microsoft.com
Long (Leo) Zhu
Department of Statistics
University of California, Los Angeles
lzhu@stat.ucla.edu
Alan Yuille
Department of Statistics, Psychology and Computer Science
University of California, Los Angeles
yuille@stat.ucla.edu
Hongjiang Zhang
Microsoft Advanced Technology Center
hjzhang@microsoft.com
Abstract
In this paper we formulate a novel AND/OR graph representation capable of describing the different configurations of deformable articulated objects such as
horses. The representation makes use of the summarization principle so that
lower level nodes in the graph only pass on summary statistics to the higher
level nodes. The probability distributions are invariant to position, orientation,
and scale. We develop a novel inference algorithm that combined a bottom-up
process for proposing configurations for horses together with a top-down process
for refining and validating these proposals. The strategy of surround suppression is applied to ensure that the inference time is polynomial in the size of input
data. The algorithm was applied to the tasks of detecting, segmenting and parsing
horses. We demonstrate that the algorithm is fast and comparable with state-of-the-art approaches.
1 Introduction
Most problems in machine intelligence can be formulated as probabilistic inference using probabilistic models defined on structured knowledge representations. Important examples include stochastic
grammars [11] and, in particular, AND/OR graphs [8],[4],[10]. In practice, the nature of the representations is constrained by the types of inference algorithms which are available. For example,
probabilistic context free grammars for natural language processing have a natural one-dimensional
structure which makes it practical to use dynamic programming (DP) for inference [11]. But DP cannot
be directly applied to vision problems, which lack this one-dimensional structure.
In this paper, we address the problem of detecting, segmenting and parsing articulated deformable
objects, such as horses, in cluttered backgrounds. Formulating these tasks as statistical inference
requires a representation that can deal with all the different possible configurations of the object
including: (a) the different appearances of sub-configurations (e.g. the variable number of visible
legs of a horse) and (b) the unknown location, size, and orientation of the object. In addition, we
must specify a fast inference algorithm that can rapidly search over all the possible configurations
of the object.
We first specify a novel AND/OR graph representation that efficiently allows for all the different
configurations of an articulated deformable object (i.e. only a small number of nodes are required).
The design of this graph uses the principle of summarization, so that lower level nodes in the graph
only pass on summary statistics (abstract) to the higher level nodes. More precisely, the nodes of
the AND/OR graph specify the position, orientation and scale of sub-configurations of the object
(together with an index variable which specifies which sub-configurations of the object are present).
The probability distribution defined on this representation obeys the Markov condition. It is designed
to be invariant to the position, pose, and size of the object. In this paper, the representation and
probability distributions are specified by hand.
We next describe an algorithm for performing inference over this representation. This is a challenging task since the space of possible configurations is enormous and there is no natural ordering to
enable dynamic programming. Our algorithm combines a bottom-up process that makes proposals for the possible configurations of the object followed by a top-down process that refines and
validates (or rejects) these proposals. The bottom-up process is based on the principle of compositionality, where we combine proposals for sub-configurations together to form proposals for bigger
configurations. To avoid a combinational explosion of proposals, we prune out proposals in two
ways: (i) removing proposals whose goodness of fit is poor, and (ii) performing surround suppression to represent local clusters of proposals by a single max-proposal. The top-down process refines
and validates (or rejects) proposals for the entire configuration by allowing max-proposals to be replaced by other proposals from their local clusters if this leads to a better overall fit. In addition,
the top-down process estimates the boundary of the object and performs segmentation. Surround
suppression ensures that the computational complexity of the inference algorithm is polynomial in the
size of the image (input data).
The algorithm was tested for the task of detecting horses in cluttered backgrounds, using a standard
dataset [2]. The input to the algorithm is the set of oriented edgelets detected in the image. The
results show that the algorithm is very fast (approximately 13 seconds) for detecting, parsing, and
segmenting the horses. Detection and segmentation are tested on 328 images and we obtain very
good results using performance measures compared to ground truth. Parsing is tested on 100 images
and we also obtain very good performance results (there are fewer test images for this task because
it is harder to obtain datasets with ground truth parsing).
2 Background
Detection, segmentation and parsing are all challenging problems. Most computer vision systems
only address one of these tasks. There has been influential work on detection [6], [9] and on the related problem of registration [5],[1]. Work on segmentation includes [12], [13], [3], [7], [14], [18],
[17] and [16]. Much of this work is formulated, or can be reformulated, in terms of probabilistic
inference. But the representations are fixed graph structures defined at a single scale. This restricted
choice of representation enables the use of standard inference algorithms (e.g. the Hungarian algorithm, belief propagation) but it puts limitations on the types of tasks that can be addressed (e.g. it
makes parsing impossible), the number of different object configurations that can be addressed, and
on the overall performance of the systems.
In the broader context of machine learning, there has been a growing use of probabilistic models
defined over variable graph structures. Important examples include stochastic grammars which are
particularly effective for natural language processing [11]. In particular, vision researchers have
advocated the use of probability models defined over AND/OR graphs [4],[10] where the OR nodes
enable the graph to have multiple structures. Similar AND/OR graphs have been used in other
machine learning problems [8].
But the representational power of AND/OR graphs comes at the price of increased computational
demand for performing inference (or learning). For one dimensional problems, such as natural
language processing, this can be handled by dynamic programming. But computation becomes
considerably harder for vision problems and it is not clear how to efficiently search over the large
number of configurations of an AND/OR graph. The inference problem simplifies significantly if the
OR nodes are restricted to lie at certain levels of the graph (e.g. [15], [20]), but these simplifications
are not suited to the problem we are addressing.
2
3
3.1
The AND/OR Graph Representation
The topological structure of the AND/OR graph
The structure of an AND/OR graph is represented by a graph G = (V, E) where V and E denote the
set of vertices and edges respectively. The vertex set V contains three types of nodes,?OR?,?AND?
and ?LEAF? nodes which are depicted in figure (1) by circles, rectangles and triangles respectively.
These nodes have attributes including position, scale, and orientation. The edge set E contains
vertical edges defining the topological structure and horizontal edges defining spatial constraints on
the node attributes. For each node ? ? V , the set of its child nodes is defined by T? .
The directed (vertical) edges connect nodes at successive levels of the tree. They connect: (a) the
AND nodes to the OR nodes, (b) the OR nodes to the AND nodes, and (c) the AND nodes to the
LEAF nodes. The LEAF nodes correspond directly to points in the image. Connection types (a) and
(c) have fixed parent-child relationships, but type (b) has switchable parent-child relationship (i.e.
the parent is connected to only one of its children, and this connection can switch). The horizontal
edges only appear relating the children of the AND nodes. They correspond to Markov Random
Fields (MRFs) and define spatial constraints on the node attributes. These constraints are defined
to be invariant to translation, rotation, and scaling of the attributes of the children.
Figure 1: The AND/OR representation of the object.
The AND/OR graph we use in this paper is represented more visually in figure (2). The top node
shows all the possible configurations of the horse (there are 40 in this paper). These configurations
are obtained by AND-ing sub-configurations corresponding to the head, back, lower torso, and back
legs of the horse (see circular nodes in the second row). Each of these sub-configurations has different aspects as illustrated by the AND nodes (rectangles in the third row). These sub-configurations,
in turn, are composed by AND-ing more elementary configurations (see fourth row) which can have
different aspects (see fifth row). (The topological structure of this representation is specified by the
authors. Future work will attempt to learn it from examples).
3.2 The state variables defined on the AND/OR graph
A configuration of the AND/OR graph is an assignment of state variables z = {z_ν} with z_ν =
(x_ν, y_ν, θ_ν, s_ν, t_ν) to each node ν, where (x, y), θ and s denote image position, orientation, and
scale respectively. The t = {t_ν} variable defines the specific topology of the graph, with t_ν ∈ T_ν.
More precisely, t_ν defines the vertical parent-child relations by indexing the children of node ν. t_ν
is fixed and t_ν = T_ν if ν is an AND node (because the node is always connected to all its children),
Figure 2:
The AND/OR graph is an efficient way to represent different appearances of an object. The bottom level of the graph
indicates points in the image. The higher levels indicate combinations of elementary configurations. The graph that we used contains eight
levels (three lower levels are not depicted here due to lack of space).
but t_ν is a variable for an OR node ν (to enable sub-configurations to switch their appearances), see
figure (2). We use the notation Z_ν to denote the state z_ν at node ν, together with the states of all the
descendant nodes of ν (i.e. the children of ν, their children, and so on). The input to the graph is the
data d = {d_ν} defined on the image lattice (at the lowest level of the hierarchy).
We define V^{LEAF}(t), V^{AND}(t), V^{OR}(t) to be the sets of LEAF, AND, and OR nodes which are
active for a specific choice of the topology t. These sets can be computed recursively from the root
node, see figure (2). The AND nodes in the second row (i.e. the second highest level of the graph)
are always activated, and so are the OR nodes in the third row. The AND nodes activated in the
fourth row, and their OR node children in the fifth row, are specified by the t variables assigned to
their parent OR nodes. This process repeats until we reach the lowest level of the graph.
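For concreteness, the graph and its state variables can be held in a structure like the following Python sketch; the field names and the traversal helper are ours, not the authors':

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    kind: str                               # 'AND', 'OR', or 'LEAF'
    children: List['Node'] = field(default_factory=list)
    # state z_nu = (x, y, theta, s, t): pose plus the topology index
    x: float = 0.0
    y: float = 0.0
    theta: float = 0.0
    s: float = 1.0
    t: Optional[int] = None                 # child index, used by OR nodes only

def active_nodes(node):
    # Collect the nodes active under the current topology t: AND nodes visit
    # all children, OR nodes only the child selected by their t variable.
    yield node
    if node.kind == 'AND':
        for child in node.children:
            yield from active_nodes(child)
    elif node.kind == 'OR' and node.t is not None:
        yield from active_nodes(node.children[node.t])

root = Node('AND', [Node('OR', [Node('LEAF'), Node('LEAF')], t=0)])
print([n.kind for n in active_nodes(root)])  # ['AND', 'OR', 'LEAF']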
A novel feature of this AND/OR representation is that the node variables are the same at all levels
of the hierarchy. We call this the summarization principle. It means that the state of an AND node
will be a simple deterministic function of the state variables of the children (see section (3.3)). This
differs from other AND/OR graphs [4],[10] where the node variables at different levels of the graph
may be at different levels of abstraction. The use of the summarization principle enables us to define
a successful inference algorithm.
3.3 The probability distribution for the AND/OR graph
The joint distribution on the states and the data is given by:
P(z, d) = (1/Z) exp{−E(d, z) − E_h(z) − E_v(z)}.                          (1)
where d is the input data and Z is the partition function.
The data term E(d, z) is given by:
E(d, z) = Σ_{ν∈V^{LEAF}(t)} f(d_ν, z_ν),                                  (2)
where V^{LEAF}(t) is the set of LEAF nodes and f(., .) is the (negative) logarithm of a Gaussian defined
over grey-scale intensity gradient (i.e. magnitude and orientation). It encourages large intensity
gradients in the image at locations of the nodes with the orientation roughly aligned to the orientation
of the boundary.
The next two terms make use of the hierarchical structure. The horizontal component of the hierarchical shape prior is used to impose the horizontal connections at a range of scales and defined
by
X
X
Eh (z) =
g(z? , z? , z? ),
(3)
??V AN D (t) (?,?,? )?t?
where V^AND(t) is the set of AND nodes whose children are OR nodes, and g(z_ν, z_ρ, z_τ) is the (negative) logarithm of a Gaussian distribution defined on the invariant shape vector l(z_ν, z_ρ, z_τ) constructed from a triple of child nodes (z_ν, z_ρ, z_τ) [20]. (This shape vector depends only on variables
of the triple, such as the internal angles, that are invariant to translation, rotation, and scaling
of the triple. This ensures that the full probability distribution is also invariant to these transformations.) The summation is over all triples formed by the child nodes of each parent; see figure (2).
(Each node has at most four children, which restricts the set of triples.) The parameters of the
Gaussian are fixed.
The vertical component E_v(z) is used to hold the structure together by relating the state of a parent
node to the states of its children. E_v(z) is divided into three vertical energy terms, denoted E_v^a(z),
E_v^b(z), and E_v^c(z), which correspond to type (a), type (b), and type (c) vertical connections respectively. Hence we have

E_v(z) = E_v^a(z) + E_v^b(z) + E_v^c(z).    (4)

E_v^a(z) specifies the coupling from an AND node to its OR-node children; the state of the parent node
is determined precisely by the states of the child nodes. This is defined by:

E_v^a(z) = Σ_{μ ∈ V^AND(t)} h(z_μ; {z_ν s.t. ν ∈ t_μ}),    (5)
where h(·, ·) = 0 if the average orientations and positions of the child nodes are equal to the orientation and position of the parent node (i.e., the vertical constraints are "hard"). If they are not
consistent, then h(·, ·) = κ, where κ is a large positive number.
E_v^b(z) accounts for the probability of the assignments of the connections from OR nodes to AND
nodes:

E_v^b(z) = Σ_{ν ∈ V^OR(t)} λ_ν(t_ν),    (6)

where λ_ν(·) is the potential function which encodes the weights of the assignments determined by t_ν.
The energy term E_v^c(z) defines the connection from the lowest AND nodes to the LEAF nodes. This
is similar to the definition of E_v^a(z), and E_v^c(z) is given by:

E_v^c(z) = Σ_{t_ν ∈ V^LEAF(t)} h(z_ν; z_{t_ν}),    (7)

where h(·, ·) = 0 if the orientation and position of the child (LEAF) node are equal to the orientation
and position of the parent (AND) node. If they are not consistent, then h(·, ·) = κ.
Finally, we can compute the energy of the sub-tree rooted at a particular node ν. The sub-tree
energy is useful when performing inference (see section (4)). It is computed by summing all the
potential functions associated with the node ν and its descendants:

E_ν(Z_ν) = E(d, z) + E_h(z) + E_v(z),    (8)

where z ∈ Z_ν and the sets V^LEAF(t), V^AND(t), V^OR(t) in the summation of each term are restricted
to the node ν and its descendants.
Now we have specified a complete probability distribution for the graph. But this model cannot by
itself produce a segmentation, since it has a limited number of nodes at the lowest level. To obtain a
closed boundary based on the states of the leaf nodes, an extra energy term E_0(d, z) at level l = 0 must
be added to the exponent in equation (1). E_0(d, z) is constructed similarly to that of Coughlan et al.
[6]. It is of the form:

E_0(z) = Σ_{ν ∈ V^LEAF} Σ_{ω ∈ C(ν, ν')} { f(d_ω, z_ω) + g(z_ω, z_{ω'}) },    (9)

where ν and ν' are neighbors at level 1, C(ν, ν') is a curve connecting ν to ν' containing a fixed
number of points, and ω' is the neighbor of ω. The function g(·, ·) takes the (negative) logarithm of a
Gaussian form to define the prior on orientation and scale. This energy term ensures that the leaf
nodes are connected by a closed boundary, which is used for segmentation.
4 Inference: Bottom-up and Top-down Processing

The task of the inference algorithm is to find a maximum a posteriori estimate of the state variables z:

z* = arg max_z p(z | d) = arg max_z p(d | z) p(z),    (10)

where p(d | z) p(z) = p(d, z) is defined in equation (1).
The inference algorithm (see the pseudo-code in figure (3)) contains a compositional bottom-up
stage which makes proposals for the node variables in the tree. This is followed by a top-down stage
which refines and validates the proposals. We use the following notation. Each node ν^l at level l
has a set of proposals {P^l_{ν,a}}, where a indexes the proposals (see table (2) for the typical number
of proposals). There are also max-proposals {MP^l_{ν,a}}, indexed by a, each associated with a local
cluster {CL^l_{ν,a}} of proposals (see table (2) for the typical number of max-proposals). Each proposal,
or max-proposal, is described by a state vector {z^l_{ν,a} : a = 1, ..., M^l_ν}, the state vectors for it and its
descendants {Z^l_{ν,a} : a = 1, ..., M^l_ν}, and an energy function score {E^l_ν(Z^l_{ν,a}) : a = 1, ..., M^l_ν}.
We obtain the proposals by a bottom-up strategy starting at level l = 2 (AND nodes) of the tree. For
a node ν² we define windows {W²_{ν,a}} in space, orientation, and scale. We exhaustively search for
all configurations within this window which have a score (goodness-of-fit criterion) E_{ν²}(P²_{ν,a}) < K₂,
where K₂ is a fixed threshold. For each window W²_{ν,a}, we select the configuration with smallest
• Bottom-Up(MP^1)
  Loop: l = 2 to L, for each node ν at level l
    - IF ν is an OR node
        1. Union: {MP^l_{ν,b}} = ∪_{ρ ∈ T_ν, a = 1,...,M^{l-1}_ρ} MP^{l-1}_{ρ,a}
    - IF ν is an AND node
        1. Composition: {P^l_{ν,b}} = ⊕_{ρ ∈ T_ν, a = 1,...,M^{l-1}_ρ} MP^{l-1}_{ρ,a}
        2. Pruning: {P^l_{ν,a}} = {P^l_{ν,a} | E_ν(P^l_{ν,a}) < K_l}
        3. Surround Suppression: {(MP^l_{ν,a}, CL^l_{ν,a})} = SurroundSuppression({P^l_{ν,a}}, ΔW), where ΔW is the size
           of the window W^l_ν defined in space, orientation, and scale.
• Top-Down(MP^L, CL^L):
    MP* = arg min over a = 1,...,M^L, with P* = MP^L_{ν,a}, of ChangeProposal(P*, MP^L_{ν,a}, CL^L_{ν,a})
• ChangeProposal(P*, MP^l_{ν,a}, CL^l_{ν,a})
    - IF ν is an OR node
        1. ChangeProposal(P*, MP^{l-1}_{t_ν,a}, CL^{l-1}_{t_ν,a})
    - IF ν is an AND node
        1. P = P* ⊖ MP^l_{ν,a}
        2. P̂ = arg min over P^l_{ν,a'} ∈ CL^l_{ν,a} of E_ν(P^l_{ν,a'} ⊕ P) + E_0(P^l_{ν,a'} ⊕ P)   (E_0(·) is obtained by dynamic programming)
        3. P* = P̂ ⊕ P
        4. Loop: for each ρ ∈ T_ν with ρ ∉ V^LEAF, and each b s.t. MP^{l-1}_{ρ,b} ∈ P*:
             ChangeProposal(P*, MP^{l-1}_{ρ,b}, CL^{l-1}_{ρ,b})
    - Return P* and its score E_ν(P*) + E_0(P*)

Figure 3: Bottom-up and Top-down Processing. ⊕ denotes the operation of combining two proposals. ⊖ denotes the operation of removing a part from a proposal.
score to be the proposal MP²_{ν,a} and store the remaining proposals below threshold in the associated
cluster CL²_{ν,a}. This window enforces surround suppression, which performs clustering to keep the
proposal with the maximum score in any local window. Surround suppression guarantees that the number
of remaining proposals at each level is proportional to the size of the image (input data). This strategy ensures that we do not obtain too many proposals in the hierarchy and avoids a combinatorial
explosion of proposals. We will analyze this property empirically in section 6. The procedure is
repeated as we go up the hierarchy. Each parent node ν^{l+1} produces proposals {P^{l+1}_{ν,a}}, and associated clusters {CL^{l+1}_{ν,a}}, by combining the proposals from its children. All proposals are required to
have scores E^{l+1}_ν(Z^{l+1}_ν) < K_{l+1}, where K_{l+1} is a threshold.
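A minimal Python sketch of the surround-suppression step follows. The binning of states into windows over space, orientation, and scale is an assumed implementation detail; the text only specifies that the minimum-energy proposal is kept per local window, with the remainder stored in its cluster.

    from collections import defaultdict

    def surround_suppression(proposals, window):
        # proposals: list of (state, energy), state = (x, y, theta, log_scale).
        # window: per-dimension bin sizes (assumed windowing scheme).
        # Returns one (max_proposal, cluster) pair per occupied window.
        bins = defaultdict(list)
        for state, energy in proposals:
            key = tuple(int(s // w) for s, w in zip(state, window))
            bins[key].append((state, energy))
        result = []
        for cluster in bins.values():
            cluster.sort(key=lambda p: p[1])   # lowest energy = best score
            result.append((cluster[0], cluster))
        return result

Because each occupied window contributes at most one max-proposal, the number of surviving proposals grows linearly with image size, which is the property used in the complexity analysis of section 5.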
The bottom-up process provides us with a set of proposals at the root (top) node. These proposals give a set of state vectors for the hierarchy for all nodes down to level l = 1, {Z^L_{ν₀,a} : a = 1, ..., M^L_0}, where ν₀ denotes the root node. In the top-down processing, for each proposal a at the
root node, we fix the state vectors of Z^L_{ν₀,a} and obtain the states of the level l = 0 variables (on the
image lattice) by minimizing E_ν(Z^L_{ν₀,a}) + E_0(Z^L_{ν₀,a}), which is performed by dynamic programming,
with the constraint that the level l = 1 nodes are fixed and the boundary contour must pass through
them. The output of dynamic programming is a dense boundary contour. Next we refine the solution for each proposal at the root node by recursively changing the parts of the proposal. This is
performed using the clusters associated with the proposals at each node. Each element of the cluster
is an alternative proposal for the state of that node. The use of these clusters enables us to perform a
set of transformations which may give a lower-energy configuration. The basic moves are to change
the state vector of a node in the tree hierarchy to another state in the same proposal cluster, and
then to determine the zeroth-level nodes (for the appropriate segment of the contour) by dynamic
programming. Changes to the state vectors at high levels of the hierarchy will cause large changes to
the boundary contour. Changes at the lower levels will only cause small changes. The procedure is
repeated as we examine each node in the hierarchy recursively.
5 Complexity of Representation and Inference

We now quantify the representational power of the AND/OR graph and the complexity of the inference algorithm. These complexity measures depend on the following quantities: (i) the number M
of AND nodes connecting to OR nodes, (ii) the maximum number K of children of AND nodes (we
restrict K ≤ 4), (iii) the maximum number W of children of OR nodes, (iv) the number h of levels
Table 1: Performance for parsing, segmentation and detection. The table compares the results for the hierarchical model (without OR nodes)
and the AND/OR graph, each with two inference algorithms: (a) bottom-up only; (b) bottom-up and top-down.

Model                  | Testing Size | Parsing | Segmentation  | Detection        | Time
Hierarchical Model (a) | 328          | 18.7    | 81.3% / 73.4% | 86.0 (282 / 328) | 3.1s
Hierarchical Model (b) | 328          | 17.6    | 83.3% / 74.2% | 88.4 (290 / 328) | 6.1s
And/Or Graph (a)       | 328          | 13.2    | 81.3% / 74.5% | 84.5 (277 / 328) | 4.5s
And/Or Graph (b)       | 328          | 12.5    | 87.1% / 75.8% | 91.2 (299 / 328) | 13.2s
[16]                   | 172          | -       | 86.2% / 75.0% | -                | -
Table 2: Complexity Analysis.

Level | Nodes | Aspects | Max-Proposals | Proposals | Time
8     | 1     | 12      | 11.1          | 2058.8    | 1.206s
6     | 8     | 1.5     | 30.6          | 268.9     | 1.338s
4     | 27    | 1       | 285.1         | 1541.5    | 1.631s
2     | 68    | 1       | 172.2         | 1180.7    | 0.351s
containing OR nodes with more than one child node, (v) the number S of clusters for AND nodes
(recall that the cluster is defined over image position, orientation and scale). In the experiments
reported in this paper we have K = 4, W = 3, h = 2, M = 36. The number of proposals is linearly
proportional to the size of the image.
The representational power is given by the number of different topological configurations of the
AND/OR graph. It is straightforward to show that this is bounded above by W^(K^h). In our experiments, the number of different topological configurations is 40. The complexity of our algorithm can
also be bounded above by M · W^K · S^K. This shows that the algorithm speed is polynomial in W
and S (and hence in the image size). The complexity for our experiments is reported in section (6).
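These bounds are easy to evaluate for the reported settings (K = 4, W = 3, h = 2, M = 36); S is not reported, so the sketch below leaves it as a free variable. Note how loose the representational bound is compared with the 40 configurations actually used:

    K, W, h, M = 4, 3, 2, 36
    print(W ** (K ** h))               # W^(K^h) = 3^16 = 43046721 (upper bound on configurations)
    for S in (5, 10, 20):              # assumed values for the unreported cluster count S
        print(S, M * W ** K * S ** K)  # inference bound M * W^K * S^K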
6 Results
The experimental results are obtained on 328 horse images [2] for detection and segmentation. We
use 100 images for parsing (which requires more work to obtain ground truth). The AND/OR model has
40 possible configurations. Some typical parsing and segmentation results are shown in figure (4).
In table (1) we compare the performance of the AND/OR graph with 40 configurations against a
simple hierarchical model with only one configuration (each OR node has one child node). Column
3 gives the parsing accuracy; the average error of the leaf node positions is about 10 pixels. Column
4 gives the precision and recall at the pixel level, respectively (i.e., whether a pixel is inside the object or
not). Column 5 quantifies the detection. We rate a detection as a success if the area of intersection
of the detected object region and the true object region is greater than half the area of the union of
these regions. The last column shows the average time taken for one image. The AND/OR graph
outperforms the simple hierarchical model in all tasks, at roughly twice the computational cost. The
hierarchical model is only capable of locating the main body, while the AND/OR graph captures more
details, such as legs and heads, under different poses. Compared to [16], where training and evaluation
are performed with half of the data set, our method (evaluated on the whole data set) achieves better
segmentation performance with simpler features. (Their method is unable to do parsing and detection.)
Table (2) shows the complexity properties of the algorithm. We describe the AND levels only (the
model has 8 levels). The computation for the OR nodes is almost instantaneous (one just needs to
list the proposals from all their children AND nodes), so we do not include it. Column 2 gives the
number of nodes at each level. Column 3 states the average number of aspects¹ of the AND nodes
at each level. Column 4 states the average number of max-proposals for each node. Column 5 gives
the average number of proposals. Column 6 gives the time. Observe that the number of proposals
increases by an order of magnitude from level 6 to level 8. This is mostly due to the similar increase
in the number of aspects (the more aspects there are, the more proposals are needed
to cover them). But surround suppression is capable of reducing the number of proposals greatly
(compare the numbers of max-proposals and proposals in table (2)).
¹The definition of aspects. Let AND node ν have children OR nodes {ν_i : i ∈ t_ν}. This gives a set of grandchildren AND nodes
∪_{i ∈ t_ν} t_{ν_i}. The aspect of ν is ∏_{i ∈ t_ν} |t_{ν_i}|. The aspect of an AND node is an important concept. When passing up the proposals to an
AND node we must take into account the number of aspects of this node. We can, in theory, have proposals for all possible aspects. The notion
of aspects only goes down two levels.
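For instance, if an AND node has three children OR nodes offering |t_{ν_i}| = 3, 2, and 2 alternative sub-configurations (assumed branching factors, for illustration), its aspect is their product:

    from math import prod
    print(prod([3, 2, 2]))  # aspect of the AND node = 12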
Figure 4: Parsed results. From left to right: original image, edge map, parsed result, and segmentation. In the edge map, one can
observe that some parts are missing or very ambiguous from low-level cues alone. The colored dots correspond to the leaf nodes of the object.
7 Conclusion
We formulated a novel AND/OR graph representation capable of describing the different configurations of deformable articulated objects. The representation makes use of the summarization principle. We developed a novel inference algorithm that combines a bottom-up process for proposing
configurations for horses with a top-down process for refining and validating these proposals. Surround suppression ensures that the inference time is polynomial in the image size. We
demonstrated that the algorithm is fast and effective, as evaluated by performance measures on a
large dataset.
8 Acknowledgments
This research was supported by NSF grant 0413214 and the W.M. Keck foundation.
References
[1] S. Belongie, J. Malik, and J. Puzicha. Shape Matching and Object Recognition Using Shape Contexts. PAMI, 2002.
[2] E. Borenstein and S. Ullman. Class-Specific, Top-Down Segmentation. ECCV, 2002.
[3] E. Borenstein and J. Malik. Shape Guided Object Segmentation. CVPR, 2006.
[4] H. Chen, Z.J. Xu, Z.Q. Liu, and S.C. Zhu. Composite Templates for Cloth Modeling and Sketching. CVPR, 2006.
[5] H. Chui and A. Rangarajan. A New Algorithm for Non-Rigid Point Matching. CVPR, 2000.
[6] J.M. Coughlan and S. Ferreira. Finding Deformable Shapes using Loopy Belief Propagation. ECCV, 2002.
[7] T. Cour and J. Shi. Recognizing Objects by Piecing Together the Segmentation Puzzle. CVPR, 2007.
[8] R. Dechter and R. Mateescu. AND/OR Search Spaces for Graphical Models. Artificial Intelligence, 2006.
[9] R. Fergus, P. Perona, and A. Zisserman. Object Class Recognition by Unsupervised Scale-Invariant Learning. CVPR, 2003.
[10] Y. Jin and S. Geman. Context and Hierarchy in a Probabilistic Image Model. CVPR, 2006.
[11] D. Klein and C. Manning. Natural Language Grammar Induction Using a Constituent-Context Model. NIPS, 2001.
[12] M.P. Kumar, P.H.S. Torr, and A. Zisserman. OBJ CUT. CVPR, 2005.
[13] B. Leibe, A. Leonardis, and B. Schiele. Combined Object Categorization and Segmentation with an Implicit Shape Model. ECCV, 2004.
[14] A. Levin and Y. Weiss. Learning to Combine Bottom-up and Top-down Segmentation. ECCV, 2006.
[15] M. Meila and M.I. Jordan. Learning with Mixtures of Trees. Journal of Machine Learning Research, 1:1-48, 2000.
[16] X. Ren, C. Fowlkes, and J. Malik. Cue Integration in Figure/Ground Labeling. NIPS, 2005.
[17] P. Srinivasan and J. Shi. Bottom-up Recognition and Parsing of the Human Body. CVPR, 2007.
[18] J. Winn and N. Jojic. LOCUS: Learning Object Classes with Unsupervised Segmentation. ICCV, 2005.
[19] L. Zhu and A. Yuille. A Hierarchical Compositional System for Rapid Object Detection. NIPS, 2006.
[20] L. Zhu, Y. Chen, and A. Yuille. Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing. NIPS, 2007.
Supervised topic models
Jon D. McAuliffe
Department of Statistics
University of Pennsylvania,
Wharton School
Philadelphia, PA
mcjon@wharton.upenn.edu
David M. Blei
Department of Computer Science
Princeton University
Princeton, NJ
blei@cs.princeton.edu
Abstract
We introduce supervised latent Dirichlet allocation (sLDA), a statistical model of
labelled documents. The model accommodates a variety of response types. We
derive a maximum-likelihood procedure for parameter estimation, which relies on
variational approximations to handle intractable posterior expectations. Prediction
problems motivate this research: we use the fitted model to predict response values
for new documents. We test sLDA on two real-world problems: movie ratings
predicted from reviews, and web page popularity predicted from text descriptions.
We illustrate the benefits of sLDA versus modern regularized regression, as well
as versus an unsupervised LDA analysis followed by a separate regression.
1 Introduction
There is a growing need to analyze large collections of electronic text. The complexity of document
corpora has led to considerable interest in applying hierarchical statistical models based on what are
called topics. Formally, a topic is a probability distribution over terms in a vocabulary. Informally,
a topic represents an underlying semantic theme; a document consisting of a large number of words
might be concisely modelled as deriving from a smaller number of topics. Such topic models provide
useful descriptive statistics for a collection, which facilitates tasks like browsing, searching, and
assessing document similarity.
Most topic models, such as latent Dirichlet allocation (LDA) [4], are unsupervised: only the words
in the documents are modelled. The goal is to infer topics that maximize the likelihood (or the posterior probability) of the collection. In this work, we develop supervised topic models, where each
document is paired with a response. The goal is to infer latent topics predictive of the response.
Given an unlabeled document, we infer its topic structure using a fitted model, then form its prediction. Note that the response is not limited to text categories. Other kinds of document-response
corpora include essays with their grades, movie reviews with their numerical ratings, and web pages
with counts of how many online community members liked them.
Unsupervised LDA has previously been used to construct features for classification. The hope was
that LDA topics would turn out to be useful for categorization, since they act to reduce data dimension [4]. However, when the goal is prediction, fitting unsupervised topics may not be a good
choice. Consider predicting a movie rating from the words in its review. Intuitively, good predictive
topics will differentiate words like "excellent", "terrible", and "average", without regard to genre.
But topics estimated from an unsupervised model may correspond to genres, if that is the dominant
structure in the corpus.
The distinction between unsupervised and supervised topic models is mirrored in existing
dimension-reduction techniques. For example, consider regression on unsupervised principal components versus partial least squares and projection pursuit [7], which both search for covariate linear
combinations most predictive of a response variable. These linear supervised methods have nonparametric analogs, such as an approach based on kernel ICA [6]. In text analysis, McCallum et al.
developed a joint topic model for words and categories [8], and Blei and Jordan developed an LDA
model to predict caption words from images [2]. In chemogenomic profiling, Flaherty et al. [5]
proposed "labelled LDA," which is also a joint topic model, but for genes and protein function
categories. It differs fundamentally from the model proposed here.
This paper is organized as follows. We first develop the supervised latent Dirichlet allocation model
(sLDA) for document-response pairs. We derive parameter estimation and prediction algorithms for
the real-valued response case. Then we extend these techniques to handle diverse response types,
using generalized linear models. We demonstrate our approach on two real-world problems. First,
we use sLDA to predict movie ratings based on the text of the reviews. Second, we use sLDA to
predict the number of "diggs" that a web page will receive in the www.digg.com community, a
forum for sharing web content of mutual interest. The digg count prediction for a page is based
on the page?s description in the forum. In both settings, we find that sLDA provides much more
predictive power than regression on unsupervised LDA features. The sLDA approach also improves
on the lasso, a modern regularized regression technique.
2 Supervised latent Dirichlet allocation
In topic models, we treat the words of a document as arising from a set of latent topics, that is, a
set of unknown distributions over the vocabulary. Documents in a corpus share the same set of K
topics, but each document uses a mix of topics unique to itself. Thus, topic models are a relaxation
of classical document mixture models, which associate each document with a single unknown topic.
Here we build on latent Dirichlet allocation (LDA) [4], a topic model that serves as the basis for
many others. In LDA, we treat the topic proportions for a document as a draw from a Dirichlet
distribution. We obtain the words in the document by repeatedly choosing a topic assignment from
those proportions, then drawing a word from the corresponding topic.
In supervised latent Dirichlet allocation (sLDA), we add to LDA a response variable associated
with each document. As mentioned, this variable might be the number of stars given to a movie, a
count of the users in an on-line community who marked an article interesting, or the category of a
document. We jointly model the documents and the responses, in order to find latent topics that will
best predict the response variables for future unlabeled documents.
We emphasize that sLDA accommodates various types of response: unconstrained real values, real
values constrained to be positive (e.g., failure times), ordered or unordered class labels, nonnegative
integers (e.g., count data), and other types. However, the machinery used to achieve this generality
complicates the presentation. So we first give a complete derivation of sLDA for the special case
of an unconstrained real-valued response. Then, in Section 2.3, we present the general version of
sLDA, and explain how it handles diverse response types.
Focus now on the case y ∈ R. Fix for a moment the model parameters: the K topics β_{1:K} (each
β_k a vector of term probabilities), the Dirichlet parameter α, and the response parameters η and σ².
Under the sLDA model, each document and response arises from the following generative process:

1. Draw topic proportions θ | α ∼ Dir(α).
2. For each word n:
   (a) Draw topic assignment z_n | θ ∼ Mult(θ).
   (b) Draw word w_n | z_n, β_{1:K} ∼ Mult(β_{z_n}).
3. Draw response variable y | z_{1:N}, η, σ² ∼ N(η^T z̄, σ²).

Here we define z̄ := (1/N) Σ_{n=1}^N z_n. The family of probability distributions corresponding to this
generative process is depicted as a graphical model in Figure 1.
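The generative process translates directly into a sampler. The Python sketch below draws a single document-response pair; the dimensions K, V, and N are arbitrary values chosen for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    K, V, N = 3, 50, 40                      # topics, vocabulary size, document length (assumed)
    alpha = np.ones(K)                       # Dirichlet parameter
    beta = rng.dirichlet(np.ones(V), size=K) # K topics, each a distribution over V terms
    eta, sigma2 = rng.normal(size=K), 0.25   # response parameters

    theta = rng.dirichlet(alpha)                        # step 1: topic proportions
    z = rng.choice(K, size=N, p=theta)                  # step 2a: topic assignments
    w = np.array([rng.choice(V, p=beta[k]) for k in z]) # step 2b: words
    zbar = np.bincount(z, minlength=K) / N              # empirical topic frequencies
    y = rng.normal(eta @ zbar, np.sqrt(sigma2))         # step 3: response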
Notice the response comes from a normal linear model. The covariates in this model are the (unobserved) empirical frequencies of the topics in the document. The regression coefficients on those
frequencies constitute η. Note that a linear model usually includes an intercept term, which amounts
to adding a covariate that always equals one. Here, such a term is redundant, because the components of z̄ always sum to one.
[Figure 1: (Left) A graphical model representation of supervised latent Dirichlet allocation, with variables α, θ_d, Z_{d,n}, W_{d,n}, Y_d, topics β_k, and response parameters η, σ². (Bottom) The topics of a 10-topic sLDA model fit to the movie review data of Section 3, arranged by their regression coefficients: the most negative topics contain words such as "least", "problem", "unfortunately", "bad", "dull", "awful", and the most positive contain words such as "motion", "perfect", "fascinating", "performances", "effective".]
By regressing the response on the empirical topic frequencies, we treat the response as nonexchangeable with the words. The document (i.e., words and their topic assignments) is generated
first, under full word exchangeability; then, based on the document, the response variable is generated. In contrast, one could formulate a model in which y is regressed on the topic proportions
θ. This treats the response and all the words as jointly exchangeable. But as a practical matter,
our chosen formulation seems more sensible: the response depends on the topic frequencies which
actually occurred in the document, rather than on the mean of the distribution generating the topics.
Moreover, estimating a fully exchangeable model with enough topics allows some topics to be used
entirely to explain the response variables, and others to be used to explain the word occurrences.
This degrades predictive performance, as demonstrated in [2].
We treat α, β_{1:K}, η, and σ² as unknown constants to be estimated, rather than random variables. We
carry out approximate maximum-likelihood estimation using a variational expectation-maximization
(EM) procedure, which is the approach taken in unsupervised LDA as well [4].
2.1 Variational E-step
Given a document and response, the posterior distribution of the latent variables is

p(θ, z_{1:N} | w_{1:N}, y, α, β_{1:K}, η, σ²) =
    p(θ | α) [ ∏_{n=1}^N p(z_n | θ) p(w_n | z_n, β_{1:K}) ] p(y | z_{1:N}, η, σ²)
    / ( ∫ dθ p(θ | α) Σ_{z_{1:N}} [ ∏_{n=1}^N p(z_n | θ) p(w_n | z_n, β_{1:K}) ] p(y | z_{1:N}, η, σ²) ).    (1)

The normalizing value is the marginal probability of the observed data, i.e., the document w_{1:N} and
response y. This normalizer is also known as the likelihood, or the evidence. As with LDA, it is not
efficiently computable. Thus, we appeal to variational methods to approximate the posterior.
Variational objective function. We maximize the evidence lower bound (ELBO) L(·), which for a
single document has the form

log p(w_{1:N}, y | α, β_{1:K}, η, σ²) ≥ L(γ, φ_{1:N}; α, β_{1:K}, η, σ²)
    = E[log p(θ | α)] + Σ_{n=1}^N E[log p(Z_n | θ)] + Σ_{n=1}^N E[log p(w_n | Z_n, β_{1:K})]
      + E[log p(y | Z_{1:N}, η, σ²)] + H(q).    (2)

Here the expectation is taken with respect to a variational distribution q. We choose the fully factorized distribution,

q(θ, z_{1:N} | γ, φ_{1:N}) = q(θ | γ) ∏_{n=1}^N q(z_n | φ_n),    (3)

where γ is a K-dimensional Dirichlet parameter vector and each φ_n parametrizes a categorical distribution over K elements. Notice E[Z_n] = φ_n.
The first three terms and the entropy of the variational distribution are identical to the corresponding
terms in the ELBO for unsupervised LDA [4]. The fourth term is the expected log probability of the
response variable given the latent topic assignments,

E[log p(y | Z_{1:N}, η, σ²)] = -(1/2) log(2πσ²) - [ y² - 2y η^T E[Z̄] + η^T E[Z̄ Z̄^T] η ] / (2σ²).    (4)

The first expectation is E[Z̄] = φ̄ := (1/N) Σ_{n=1}^N φ_n, and the second expectation is

E[Z̄ Z̄^T] = (1/N²) [ Σ_{n=1}^N Σ_{m≠n} φ_n φ_m^T + Σ_{n=1}^N diag{φ_n} ].    (5)

To see (5), notice that for m ≠ n, E[Z_n Z_m^T] = E[Z_n] E[Z_m]^T = φ_n φ_m^T because the variational
distribution is fully factorized. On the other hand, E[Z_n Z_n^T] = diag(E[Z_n]) = diag(φ_n) because Z_n
is an indicator vector.
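Both moments are cheap to compute from the variational parameters; a NumPy sketch:

    import numpy as np

    def zbar_moments(phi):
        # phi: (N, K) array whose rows are the variational multinomials phi_n.
        # Returns E[Zbar] and E[Zbar Zbar^T] under the factorized q, as in (5).
        N = phi.shape[0]
        s = phi.sum(axis=0)
        cross = np.outer(s, s) - phi.T @ phi  # sum over m != n of phi_n phi_m^T
        return s / N, (cross + np.diag(s)) / N ** 2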
For a single document-response pair, we maximize (2) with respect to φ_{1:N} and γ to obtain an
estimate of the posterior. We use block coordinate-ascent variational inference, maximizing with
respect to each variational parameter vector in turn.
Optimization with respect to γ. The terms that involve the variational Dirichlet γ are identical to
those in unsupervised LDA, i.e., they do not involve the response variable y. Thus, the coordinate
ascent update is as in [4],

γ^new ← α + Σ_{n=1}^N φ_n.    (6)
Optimization with respect to φ_j. Define φ_{-j} := Σ_{n≠j} φ_n, and fix j ∈ {1, ..., N}. In [3], we
maximize the Lagrangian of the ELBO, which incorporates the constraint that the components of φ_j
sum to one, and obtain the coordinate update

φ_j^new ∝ exp{ E[log θ | γ] + E[log p(w_j | β_{1:K})] + (y / (N σ²)) η - [ 2 (η^T φ_{-j}) η + (η ∘ η) ] / (2 N² σ²) },    (7)

where η ∘ η denotes the elementwise product of η with itself. Exponentiating a vector means forming the vector of exponentials. The proportionality symbol
means the components of φ_j^new are computed according to (7), then normalized to sum to one. Note
that E[log θ_i | γ] = Ψ(γ_i) - Ψ(Σ_j γ_j), where Ψ(·) is the digamma function.
The central difference between LDA and sLDA lies in this update. As in LDA, the jth word's
variational distribution over topics depends on the word's topic probabilities under the actual model
(determined by β_{1:K}). But w_j's variational distribution, and those of all other words, affect the
probability of the response, through the expected residual sum of squares (RSS), which is the second
term in (4). The end result is that the update (7) also encourages φ_j to decrease this expected RSS.
The update (7) depends on the variational parameters φ_{-j} of all other words. Thus, unlike LDA, the
φ_j cannot be updated in parallel. Distinct occurrences of the same term are treated separately.
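A sketch of one sequential sweep of the update (7) follows. Here Elogtheta stands for the vector E[log θ | γ] and log_beta_w[j] for the vector of log β_{k, w_j} over topics k; both names are assumptions, and the precomputation of these quantities is omitted.

    import numpy as np

    def update_phis(phi, Elogtheta, log_beta_w, eta, y, sigma2):
        # phi: (N, K), updated in place one word at a time, as (7) requires.
        N = phi.shape[0]
        for j in range(N):
            phi_minus_j = phi.sum(axis=0) - phi[j]
            log_phi = (Elogtheta + log_beta_w[j]
                       + (y / (N * sigma2)) * eta
                       - (2.0 * (eta @ phi_minus_j) * eta + eta * eta)
                         / (2.0 * N ** 2 * sigma2))
            log_phi -= log_phi.max()        # numerical stability before exp
            phi[j] = np.exp(log_phi)
            phi[j] /= phi[j].sum()          # normalize onto the simplex
        return phi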
2.2 M-step and prediction
The corpus-level ELBO lower bounds the joint log likelihood across documents, which is the sum of
the per-document log-likelihoods. In the E-step, we estimate the approximate posterior distribution
for each document-response pair using the variational inference algorithm described above. In the
M-step, we maximize the corpus-level ELBO with respect to the model parameters β_{1:K}, η, and σ².
For our purposes, it suffices simply to fix α to 1/K times the ones vector. In this section, we add
document indexes to the previous section's quantities, so y becomes y_d and Z̄ becomes Z̄_d.
Estimating the topics. The M-step updates of the topics β_{1:K} are the same as for unsupervised
LDA, where the probability of a word under a topic is proportional to the expected number of times
that it was assigned to that topic [4],

β̂_{k,w}^new ∝ Σ_{d=1}^D Σ_{n=1}^N 1(w_{d,n} = w) φ^k_{d,n}.    (8)

Here again, proportionality means that each β̂_k^new is normalized to sum to one.
Estimating the regression parameters. The only terms of the corpus-level ELBO involving η and
σ² come from the corpus-level analog of (4).
Define y = y_{1:D} as the vector of response values across documents. Let A be the D × (K + 1)
matrix whose rows are the vectors Z̄_d^T. Then the corpus-level version of (4) is

E[log p(y | A, η, σ²)] = -(D/2) log(2πσ²) - (1/(2σ²)) E[ (y - Aη)^T (y - Aη) ].    (9)

Here the expectation is over the matrix A, using the variational distribution parameters chosen in
the previous E-step. Expanding the inner product, using linearity of expectation, and applying the
first-order condition for η, we arrive at an expected-value version of the normal equations:

E[A^T A] η = E[A]^T y,   so that   η̂^new ← (E[A^T A])^{-1} E[A]^T y.    (10)

Note that the dth row of E[A] is just φ̄_d, and E[A^T A] = Σ_d E[Z̄_d Z̄_d^T], with each term having a
fixed value from the previous E-step, given by (5). We caution again: formulas in the previous
section, such as (5), suppress the document indexes which appear here.
We now apply the first-order condition for σ² to (9) and evaluate the solution at η̂^new, obtaining:

σ̂²_new ← (1/D) { y^T y - y^T E[A] (E[A^T A])^{-1} E[A]^T y }.    (11)
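Given the per-document moments from the E-step, (10) and (11) reduce to two matrix computations. A sketch, with EZbar holding the rows φ̄_d of E[A] and EZZ the stacked matrices E[Z̄_d Z̄_d^T] from (5):

    import numpy as np

    def mstep_response(y, EZbar, EZZ):
        # y: (D,) responses; EZbar: (D, K); EZZ: (D, K, K).
        EAtA = EZZ.sum(axis=0)                         # E[A^T A]
        eta = np.linalg.solve(EAtA, EZbar.T @ y)       # (10)
        sigma2 = (y @ y - y @ (EZbar @ eta)) / len(y)  # (11), since eta solves the normal equations
        return eta, sigma2

Prediction for a new document, equation (13) below, then reduces to eta @ phibar_new, where phibar_new comes from unsupervised-LDA-style inference on the new document.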
Prediction. Our focus in applying sLDA is prediction. Specifically, we wish to compute the expected response value, given a new document w_{1:N} and a fitted model {α, β_{1:K}, η, σ²}:

E[Y | w_{1:N}, α, β_{1:K}, η, σ²] = η^T E[Z̄ | w_{1:N}, α, β_{1:K}].    (12)

The identity follows easily from iterated expectation. We approximate the posterior mean of Z̄ using
the variational inference procedure of the previous section. But here, the terms depending on y are
removed from the φ_j update in (7). Notice this is the same as variational inference for unsupervised
LDA: since we averaged the response variable out of the right-hand side in (12), what remains is the
standard unsupervised LDA model for Z_{1:N} and θ.
Thus, given a new document, we first compute E_q[Z_{1:N}], the variational posterior distribution of the
latent variables Z_n. Then, we estimate the response with
E[Y | w_{1:N}, α, β_{1:K}, η, σ²] ≈ η^T E_q[Z̄] = η^T φ̄.    (13)

2.3 Diverse response types via generalized linear models
Up to this point, we have confined our attention to an unconstrained real-valued response variable.
In many applications, however, we need to predict a categorical label, or a non-negative integral
count, or a response with other kinds of constraints. Sometimes it is reasonable to apply a normal
linear model to a suitably transformed version of such a response. When no transformation results
in approximate normality, statisticians often make use of a generalized linear model, or GLM [9].
In this section, we describe sLDA in full generality, replacing the normal linear model of the earlier
exposition with a GLM formulation. As we shall see, the result is a generic framework which can be
specialized in a straightforward way to supervised topic models having a variety of response types.
There are two main ingredients in a GLM: the "random component" and the "systematic component." For the random component, one takes the distribution of the response to be an exponential
dispersion family with natural parameter ζ and dispersion parameter δ:

p(y | ζ, δ) = h(y, δ) exp{ (ζ y - A(ζ)) / δ }.    (14)

For each fixed δ, (14) is an exponential family, with base measure h(y, δ), sufficient statistic y,
and log-normalizer A(ζ). The dispersion parameter provides additional flexibility in modeling the
variance of y. Note that (14) need not be an exponential family jointly in (ζ, δ).
In the systematic component of the GLM, we relate the exponential-family parameter ζ of the random component to a linear combination of covariates, the so-called linear predictor. For sLDA,
the linear predictor is η^T z̄. In fact, we simply set ζ = η^T z̄. Thus, in the general version of sLDA,
the previous specification in step 3 of the generative process is replaced with

y | z_{1:N}, η, δ ∼ GLM(z̄, η, δ),    (15)

so that

p(y | z_{1:N}, η, δ) = h(y, δ) exp{ (η^T z̄ · y - A(η^T z̄)) / δ }.    (16)
We now have the flexibility to model any type of response variable whose distribution can be written
in exponential dispersion form (14). As is well known, this includes many commonly used distributions: the normal; the binomial (for binary response); the Poisson and negative binomial (for count
data); the gamma, Weibull, and inverse Gaussian (for failure time data); and others. Each of these
distributions corresponds to a particular choice of h(y, ?) ?
and A(? ). For example, it is easy to show
that the normal distribution corresponds to h(y, ?) = (1/ 2? ?) exp{?y 2 /(2?)} and A(? ) = ? 2 /2.
In this case, the usual parameters ? and ? 2 just equal ? and ?, respectively.
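Two textbook instances of the form (14), written out explicitly; the Poisson case fixes the dispersion at δ = 1.

    import numpy as np
    from math import factorial

    # Normal: h(y, d) = exp(-y^2 / (2 d)) / sqrt(2 pi d), A(z) = z^2 / 2.
    normal_h = lambda y, d: np.exp(-y ** 2 / (2 * d)) / np.sqrt(2 * np.pi * d)
    normal_A = lambda z: 0.5 * z ** 2

    # Poisson (d = 1): h(y, d) = 1 / y!, A(z) = exp(z), mean mu(z) = exp(z).
    poisson_h = lambda y, d=1.0: 1.0 / factorial(int(y))
    poisson_A = lambda z: np.exp(z)

    def density(y, zeta, delta, h, A):
        # p(y | zeta, delta) = h(y, delta) exp{(zeta y - A(zeta)) / delta}, as in (14).
        return h(y, delta) * np.exp((zeta * y - A(zeta)) / delta)

    print(density(1.2, 0.5, 1.0, normal_h, normal_A))          # N(0.5, 1) density at y = 1.2
    print(density(3, np.log(2.0), 1.0, poisson_h, poisson_A))  # Poisson(2) pmf at y = 3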
Variational E-step. The distribution of y appears only in the cross-entropy term (4). Its form under
the GLM is

E[log p(y | Z_{1:N}, η, δ)] = log h(y, δ) + (1/δ) [ η^T E[Z̄] y - E[A(η^T Z̄)] ].    (17)

This changes the coordinate ascent step for each φ_j, but the variational optimization is otherwise
unaffected. In particular, the gradient of the ELBO with respect to φ_j becomes

∂L/∂φ_j = E[log θ | γ] + E[log p(w_j | β_{1:K})] - log φ_j + 1 + (y / (N δ)) η - (1/δ) (∂/∂φ_j) E[A(η^T Z̄)].    (18)

Thus, the key to variational inference in sLDA is obtaining the gradient of the expected GLM log-normalizer. Sometimes there is an exact expression, such as the normal case of Section 2. As another
example, the Poisson GLM leads to an exact gradient, which we omit for brevity.
Other times, no exact gradient is available. In a longer paper [3], we study two methods for this
situation. First, we can replace -E[A(η^T Z̄)] with an adjustable lower bound whose gradient is
known exactly; then we maximize over the original variational parameters plus the parameter controlling the bound. Alternatively, an application of the multivariate delta method for moments [1],
plus standard exponential family theory, shows

E[A(η^T Z̄)] ≈ A(η^T φ̄) + (1/2) Var_GLM(Y | ζ = η^T φ̄) · η^T Var_q(Z̄) η.    (19)

Here, Var_GLM denotes the response variance under the GLM, given a specified value of the natural
parameter; in all standard cases, this variance is a closed-form function of the natural parameter. The
variance-covariance matrix of Z̄ under q is already known in closed form from E[Z̄] and (5). Thus, computing
∂/∂φ_j of (19) exactly is mechanical. However, using this approximation gives up the usual guarantee that the ELBO lower bounds the marginal likelihood. We forgo details and further examples due
to space constraints.
The GLM contribution to the gradient determines whether the φ_j coordinate update itself has a
closed form, as it does in the normal case (7) and the Poisson case (omitted). If the update is not
closed-form, we use numerical optimization, supplying a gradient obtained from one of the methods
described in the previous paragraph.
Parameter estimation (M-step). The topic parameter estimates are given by (8), as before. For the
corpus-level ELBO, the gradient with respect to η becomes

(∂/∂η) (1/δ) { Σ_{d=1}^D [ η^T φ̄_d y_d - E[A(η^T Z̄_d)] ] } = (1/δ) { Σ_{d=1}^D φ̄_d y_d - Σ_{d=1}^D E_q[ μ(η^T Z̄_d) Z̄_d ] }.    (20)

The appearance of μ(·) = E_GLM[Y | ζ = ·] follows from exponential family properties. This GLM
mean response is a known function of η^T Z̄_d in all standard cases. However, E_q[μ(η^T Z̄_d) Z̄_d] has
Figure 2: Predictive R² and per-word held-out log likelihood for the movie and Digg data (see Section 3). [Plots omitted: for each corpus, predictive R² and per-word held-out log likelihood are plotted against the number of topics, comparing sLDA with LDA.]
an exact solution only in some cases (e.g., normal, Poisson). In other cases, we approximate the
expectation with methods similar to those applied for the φ_j coordinate update. Reference [3] has
details, including estimation of δ and prediction, where we encounter the same issues.
The derivative with respect to δ, evaluated at η̂^new, is

Σ_{d=1}^D [ ∂h(y_d, δ)/∂δ ] / h(y_d, δ) - (1/δ²) { Σ_{d=1}^D η̂_new^T φ̄_d y_d - Σ_{d=1}^D E_q[ A(η̂_new^T Z̄_d) ] }.    (21)
Given that the rightmost summation has been evaluated, exactly or approximately, during the η
optimization, (21) has a closed form. Depending on h(y, δ) and its partial derivative with respect to δ, we
obtain δ̂^new either in closed form or via one-dimensional numerical optimization.
Prediction. We form predictions just as in Section 2.2. The difference is that we now approximate
the expected response value of a test document as

E[Y | w_{1:N}, α, β_{1:K}, η, δ] ≈ E_q[ μ(η^T Z̄) ].    (22)

Again, this follows from iterated expectation plus the variational approximation. When the variational expectation cannot be computed exactly, we apply the approximation methods we relied on
for the GLM E-step and M-step. We defer specifics to [3].
3 Empirical results
We evaluated sLDA on two prediction problems. First, we consider "sentiment analysis" of newspaper movie reviews. We use the publicly available data introduced in [10], which contains movie
reviews paired with the number of stars given. While Pang and Lee treat this as a classification
problem, we treat it as a regression problem. With a 5000-term vocabulary chosen by tf-idf, the
corpus contains 5006 documents and comprises 1.6M words.
Second, we introduce the problem of predicting web page popularity on Digg.com. Digg is a community of users who share links to pages by submitting them to the Digg homepage, with a short
description. Once submitted, other users "digg" the links they like. Links are sorted on the Digg
homepage by the number of diggs they have received. Our Digg data set contains a year of link
descriptions, paired with the number of diggs each received during its first week on the homepage.
(This corpus will be made publicly available at publication.) We restrict our attention to links in the
technology category. After trimming the top ten outliers, and using a 4145-term vocabulary chosen
by tf-idf, the Digg corpus contains 4078 documents and comprises 94K words.
For both sets of response variables, we transformed to approximate normality by taking logs. This
makes the data amenable to the continuous-response model of Section 2; for these two problems,
generalized linear modeling turned out to be unnecessary. We initialized β_{1:K} to uniform topics, σ²
to the sample variance of the response, and η to a grid on [-1, 1] in increments of 2/K. We ran EM
until the relative change in the corpus-level likelihood bound was less than 0.01%. In the E-step,
we ran coordinate-ascent variational inference for each document until the relative change in the
per-document ELBO was less than 0.01%. For the movie review data set, we illustrate in Figure 1 a
matching of the top words from each topic to the corresponding coefficient η_k.
We assessed the quality of the predictions with "predictive R²." In our 5-fold cross-validation (CV),
we defined this quantity as the fraction of variability in the out-of-fold response values which is
captured by the out-of-fold predictions: pR² := 1 - ( Σ (y - ŷ)² ) / ( Σ (y - ȳ)² ).
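A direct implementation of this quantity; whether ȳ is the in-fold or out-of-fold mean is left implicit in the text, so its choice here is an assumption.

    import numpy as np

    def predictive_r2(y_true, y_pred, y_bar):
        # pR2 = 1 - sum((y - yhat)^2) / sum((y - ybar)^2) on held-out responses.
        ss_res = np.sum((y_true - y_pred) ** 2)
        ss_tot = np.sum((y_true - y_bar) ** 2)
        return 1.0 - ss_res / ss_tot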
We compared sLDA to linear regression on the φ̄_d from unsupervised LDA. This is the regression
equivalent of using LDA topics as classification features [4]. Figure 2 (L) illustrates that sLDA provides improved predictions on both data sets. Moreover, this improvement does not come at the cost
of document model quality. The per-word hold-out likelihood comparison in Figure 2 (R) shows that
sLDA fits the document data as well as or better than LDA. Note that Digg prediction is significantly
harder than movie review sentiment prediction, and that the homogeneity of Digg technology
content leads the model to favor a small number of topics.
Finally, we compared sLDA to the lasso, which is L₁-regularized least-squares regression. The
lasso is a widely used prediction method for high-dimensional problems. We used each document's
empirical distribution over words as its lasso covariates, setting the lasso complexity parameter with
5-fold CV. On the Digg data, the lasso's optimal model complexity yielded a CV pR² of 0.088. The best
sLDA pR² was 0.095, an 8.0% relative improvement. On the movie data, the best lasso pR² was 0.457
versus 0.500 for sLDA, a 9.4% relative improvement. Note moreover that the lasso provides only a
prediction rule, whereas sLDA models latent structure useful for other purposes.
4 Discussion
We have developed sLDA, a statistical model of labelled documents. The model accommodates the
different types of response variable commonly encountered in practice. We presented a variational
procedure for approximate posterior inference, which we then incorporated in an EM algorithm
for maximum-likelihood parameter estimation. We studied the model's predictive performance on
two real-world problems. In both cases, we found that sLDA moderately improved on the lasso,
a state-of-the-art regularized regression method. Moreover, the topic structure recovered by sLDA
had higher hold-out likelihood than LDA on one problem, and equivalent hold-out likelihood on the
other. These results illustrate the benefits of supervised dimension reduction when prediction is the
ultimate goal.
Acknowledgments
David M. Blei is supported by grants from Google and the Microsoft Corporation.
References

[1] P. Bickel and K. Doksum. Mathematical Statistics. Prentice Hall, 2000.
[2] D. Blei and M. Jordan. Modeling annotated data. In SIGIR, pages 127-134. ACM Press, 2003.
[3] D. Blei and J. McAuliffe. Supervised topic models. In preparation, 2007.
[4] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. JMLR, 3:993-1022, 2003.
[5] P. Flaherty, G. Giaever, J. Kumm, M. Jordan, and A. Arkin. A latent variable model for chemogenomic profiling. Bioinformatics, 21(15):3286-3293, 2005.
[6] K. Fukumizu, F. Bach, and M. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. Journal of Machine Learning Research, 5:73-99, 2004.
[7] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. 2001.
[8] A. McCallum, C. Pal, G. Druck, and X. Wang. Multi-conditional learning: Generative/discriminative training for clustering and classification. In AAAI, 2006.
[9] P. McCullagh and J. A. Nelder. Generalized Linear Models. Chapman & Hall, 1989.
[10] B. Pang and L. Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the ACL, 2005.
2,568 | 3,329 | Optimistic Linear Programming gives Logarithmic
Regret for Irreducible MDPs
Ambuj Tewari
Computer Science Division
University of California, Berkeley
Berkeley, CA 94720, USA
ambuj@cs.berkeley.edu
Peter L. Bartlett
Computer Science Division and Department of Statistics
University of California, Berkeley
Berkeley, CA 94720, USA
bartlett@cs.berkeley.edu
Abstract
We present an algorithm called Optimistic Linear Programming (OLP) for learning to optimize average reward in an irreducible but otherwise unknown Markov
decision process (MDP). OLP uses its experience so far to estimate the MDP. It
chooses actions by optimistically maximizing estimated future rewards over a set
of next-state transition probabilities that are close to the estimates, a computation
that corresponds to solving linear programs. We show that the total expected reward obtained by OLP up to time T is within C(P ) log T of the reward obtained
by the optimal policy, where C(P ) is an explicit, MDP-dependent constant. OLP
is closely related to an algorithm proposed by Burnetas and Katehakis with four
key differences: OLP is simpler, it does not require knowledge of the supports
of transition probabilities, the proof of the regret bound is simpler, but our regret
bound is a constant factor larger than the regret of their algorithm. OLP is also
similar in flavor to an algorithm recently proposed by Auer and Ortner. But OLP
is simpler and its regret bound has a better dependence on the size of the MDP.
1
Introduction
Decision making under uncertainty is one of the principal concerns of Artificial Intelligence and
Machine Learning. Assuming that the decision maker or agent is able to perfectly observe its own
state, uncertain systems are often modeled as Markov decision processes (MDPs). Given complete
knowledge of the parameters of an MDP, there are standard algorithms to compute optimal policies,
i.e., rules of behavior such that some performance criterion is maximized. A frequent criticism of
these algorithms is that they assume an explicit description of the MDP which is seldom available.
The parameters constituting the description are themselves estimated by simulation or experiment
and are thus not known with complete reliability. Taking this into account brings us to the well
known exploration vs. exploitation trade-off. On one hand, we would like to explore the system as
well as we can to obtain reliable knowledge about the system parameters. On the other hand, if we
keep exploring and never exploit the knowledge accumulated, we will not behave optimally.
Given a policy π, how do we measure its ability to handle this trade-off? Suppose the agent gets a
numerical reward at each time step and we measure performance by the accumulated reward over
time. Then, a meaningful quantity to evaluate the policy π is its regret over time. To understand
what regret means, consider an omniscient agent who knows all parameters of the MDP accurately
and behaves optimally. Let V_T be the expected reward obtained by this agent up to time T. Let V_T^π
denote the corresponding quantity for π. Then the regret R_T^π = V_T − V_T^π measures how much π is
hurt due to its incomplete knowledge of the MDP up to time T. If we can show that the regret R_T^π
grows slowly with time T, for all MDPs in a sufficiently big class, then we can safely conclude that
π is making a judicious trade-off between exploration and exploitation. It is rather remarkable that
for this notion of regret, logarithmic bounds have been proved in the literature [1,2]. This means that
there are policies π with R_T^π = O(log T). Thus the per-step regret R_T^π / T goes to zero very quickly.
Burnetas and Katehakis [1] proved that for any policy π (satisfying certain reasonable assumptions)
R_T^π ≥ C_B(P) log T, where they identified the constant C_B(P). This constant depends on the transition function P of the MDP¹. They also gave an algorithm (we call it BKA) that achieves this rate
and is therefore optimal in a very strong sense. However, besides assuming that the MDP is irreducible (see Assumption 1 below) they assumed that the support sets of the transition distributions
p_i(a) are known for all state-action pairs. In this paper, we not only get rid of this assumption but
our optimistic linear programming (OLP) algorithm is also computationally simpler. At each step,
OLP considers certain parameters in the vicinity of the estimates. Like BKA, OLP makes optimistic
choices among these. But now, making these choices only involves solving linear programs (LPs)
to maximize linear functions over L1 balls. BKA instead required solving non-linear (though convex) programs due to the use of KL-divergence. Another benefit of using the L1 distance is that
it greatly simplifies a significant part of the proof. The price we pay for these advantages is that
the regret of OLP is C(P) log T asymptotically, for a constant C(P) ≥ C_B(P). We should note
here that a number of algorithms in the literature have been inspired by the "optimism in the face of
uncertainty" principle [3]-[7].
The algorithm of Auer and Ortner (we refer to it as AOA) is another logarithmic regret algorithm for
irreducible² MDPs. AOA does not solve an optimization problem at every time step but only when
a confidence interval is halved. But then the optimization problem they solve is more complicated
because they find a policy to use in the next few time steps by optimizing over a set of MDPs. The
regret of AOA is C_A(P) log T where

    C_A(P) = c |S|^5 |A| T_w(P) κ(P)^2 / Δ*(P)^2,    (1)

for some universal constant c. Here |S|, |A| denote the state and action space size, T_w(P) is the worst
case hitting time over deterministic policies (see Eqn. (12)) and Δ*(P) is the difference between
the long term average return of the best policy and that of the next best policy. The constant κ(P) is
also defined in terms of hitting times. Under Auer and Ortner's assumption of bounded rewards, we
can show that the constant for OLP satisfies

    C(P) ≤ 2|S||A|T(P)^2 / Φ*(P).    (2)

Here T(P) is the hitting time of an optimal policy and is therefore necessarily smaller than T_w(P). We
get rid of the dependence on κ(P) while replacing T_w(P) with T(P)^2. Most importantly, we significantly improve the dependence on the state space size. The constant Φ*(P) can roughly be thought
of as the minimum (over states) difference between the quality of the best and the second best action (see Eqn. (9)). The constants Δ*(P) and Φ*(P) are similar though not directly comparable.
Nevertheless, note that C(P) depends inversely on Φ*(P), not Φ*(P)^2.
2 Preliminaries

Consider an MDP (S, A, R, P) where S is the set of states, A = ∪_{i∈S} A(i) is the set of actions
(A(i) being the actions available in state i), R = {r(i,a)}_{i∈S, a∈A(i)} are the rewards and P =
{p_{i,j}(a)}_{i,j∈S, a∈A(i)} are the transition probabilities. For simplicity of analysis, we assume that
the rewards are known to us beforehand. We do not assume that we know the support sets of the
distributions p_i(a).

The history ω_t up to time t is a sequence i_0, k_0, ..., i_{t−1}, k_{t−1}, i_t such that k_s ∈ A(i_s) for all s < t.
A policy π is a sequence {π_t} of probability distributions on A given ω_t such that π_t(A(s_t) | ω_t) = 1,
where s_t denotes the random variable representing the state at time t. The set of all policies is
denoted by Π. A deterministic policy is simply a function μ : S → A such that μ(i) ∈ A(i).
Denote the set of deterministic policies by Π_D. If D is a subset of A, let Π(D) denote the set of
policies that take actions in D. Probability and expectation under a policy π, transition function P
and starting state i_0 will be denoted by P^{π,P}_{i_0} and E^{π,P}_{i_0} respectively. Given history ω_t, let N_t(i),
N_t(i,a) and N_t(i,a,j) denote the number of occurrences of the state i, the pair (i,a) and the triplet
(i,a,j) respectively in ω_t.

We make the following irreducibility assumption regarding the MDP.
Assumption 1. For all μ ∈ Π_D, the transition matrix P^μ = (p_{i,j}(μ(i)))_{i,j∈S} is irreducible (i.e. it
is possible to reach any state from any other state).

¹ Notation for MDP parameters is defined in Section 2 below.
² Auer & Ortner prove claims for unichain MDPs but their usage seems non-standard. The MDPs they call
unichain are called irreducible in standard textbooks (for example, see [9, p. 348]).
Consider the rewards accumulated by the policy π before time T,

    V_T^π(i_0, P) := E^{π,P}_{i_0}[ Σ_{t=0}^{T−1} r(s_t, a_t) ],

where a_t is the random variable representing the action taken by π at time t. Let V_T(i_0, P) be the
maximum possible sum of expected rewards before time T,

    V_T(i_0, P) := sup_{π∈Π} V_T^π(i_0, P).

The regret of a policy π at time T is a measure of how well the expected rewards of π compare with
the above quantity,

    R_T^π(i_0, P) := V_T(i_0, P) − V_T^π(i_0, P).

Define the long term average reward of a policy π as

    λ^π(i_0, P) := liminf_{T→∞} V_T^π(i_0, P) / T.

Under Assumption 1, the above limit exists and is independent of the starting state i_0. Given a
restricted set D ⊆ A of actions, the gain or the best long term average performance is

    λ(P, D) := sup_{π∈Π(D)} λ^π(i_0, P).

As a shorthand, define λ*(P) := λ(P, A).
2.1 Optimality Equations

A restricted problem (P, D) is obtained from the original MDP by choosing subsets D(i) ⊆ A(i)
and setting D = ∪_{i∈S} D(i). The transition and reward functions of the restricted problems are
simply the restrictions of P and r to D. Assumption 1 implies that there is a bias vector h(P, D) =
{h(i; P, D)}_{i∈S} such that the gain λ(P, D) and bias h(P, D) are the unique solutions to the average
reward optimality equations:

    ∀i ∈ S,  λ(P, D) + h(i; P, D) = max_{a∈D(i)} [ r(i, a) + ⟨p_i(a), h(P, D)⟩ ].    (3)

We will use h*(P) to denote h(P, A). Also, denote the infinity norm ‖h*(P)‖_∞ by H*(P). Note
that if h*(P) is a solution to the optimality equations and e is the vector of ones, then h*(P) + ce
is also a solution for any scalar c. We can therefore assume h*(i*; P) = 0 for some fixed i* ∈ S
without any loss of generality.

It will be convenient to have a way to denote the quantity inside the "max" that appears in the
optimality equations. Accordingly, define

    L(i, a, p, h) := r(i, a) + ⟨p, h⟩,
    L*(i; P, D) := max_{a∈D(i)} L(i, a, p_i(a), h(P, D)).

To measure the degree of suboptimality of actions available at a state, define

    φ*(i, a; P) = L*(i; P, A) − L(i, a, p_i(a), h*(P)).

Note that the optimal actions are precisely those for which the above quantity is zero:

    O(i; P, D) := {a ∈ D(i) : φ*(i, a; P) = 0},
    O(P, D) := ∪_{i∈S} O(i; P, D).

Any policy in Π(O(P, D)) is an optimal policy, i.e.,

    ∀π ∈ Π(O(P, D)),  λ^π(P) = λ(P, D).
2.2 Critical pairs

From now on, Δ₊ will denote the probability simplex of dimension determined by context. For a
suboptimal action a ∉ O(i; P, A), the following set contains probability distributions q such that if
p_i(a) is changed to q, the quality of action a comes within ε of an optimal action. Thus, q makes a
look almost optimal:

    MakeOpt(i, a; P, ε) := {q ∈ Δ₊ : L(i, a, q, h*(P)) ≥ L*(i; P, A) − ε}.    (4)

Those suboptimal state-action pairs for which MakeOpt is never empty, no matter how small ε is,
play a crucial role in determining the regret. We call these critical state-action pairs,

    Crit(P) := {(i, a) : a ∉ O(i; P, A) ∧ (∀ε > 0, MakeOpt(i, a; P, ε) ≠ ∅)}.    (5)

Define the function,

    J_{i,a}(p; P, ε) := inf{ ‖p − q‖₁² : q ∈ MakeOpt(i, a; P, ε) }.    (6)

To make sense of this definition, consider p = p_i(a). The above infimum is then the least distance
(in the L1 sense) one has to move away from p_i(a) to make the suboptimal action a look ε-optimal.
Taking the limit of this as ε decreases gives us a quantity that also plays a crucial role in determining
the regret,

    K(i, a; P) := lim_{ε→0} J_{i,a}(p_i(a); P, ε).    (7)

Intuitively, if K(i, a; P) is small, it is easy to confuse a suboptimal action with an optimal one and
so it should be difficult to achieve small regret. The constant that multiplies log T in the regret bound
of our algorithm OLP (see Algorithm 1 and Theorem 4 below) is the following:

    C(P) := Σ_{(i,a)∈Crit(P)} 2 φ*(i, a; P) / K(i, a; P).    (8)

This definition might look a bit hard to interpret, so we give an upper bound on C(P) just in terms
of the infinity norm H*(P) of the bias and Φ*(P). This latter quantity is defined below to be the
minimum degree of suboptimality of a critical action.
Proposition 2. Suppose A(i) = A for all i ∈ S. Define

    Φ*(P) := min_{(i,a)∈Crit(P)} φ*(i, a; P).    (9)

Then, for any P,

    C(P) ≤ 2|S||A|H*(P)² / Φ*(P).

See the appendix for a proof.
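Since MakeOpt is cut out of the simplex by a single linear inequality, the infimum in (6) can be computed by linear programming. The Python sketch below (ours, not from the paper) minimizes ‖p − q‖₁ using auxiliary variables u bounding |p − q| coordinatewise, then squares the optimum; the function name and argument layout are assumptions made for illustration.

import numpy as np
from scipy.optimize import linprog

def J_value(p, h, r_ia, L_star, eps):
    # inf ||p - q||_1^2 over q in MakeOpt(i, a; P, eps), Eq. (6), as an LP in (q, u):
    # minimize sum(u) s.t. q - u <= p, -q - u <= -p, <q, h> >= L* - eps - r(i, a),
    # q in the probability simplex.
    n = len(p)
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_ub = np.vstack([np.hstack([np.eye(n), -np.eye(n)]),    #  q - u <= p
                      np.hstack([-np.eye(n), -np.eye(n)]),   # -q - u <= -p
                      np.concatenate([-h, np.zeros(n)])])    # -<h, q> <= r - L* + eps
    b_ub = np.concatenate([p, -p, [r_ia - L_star + eps]])
    A_eq = np.concatenate([np.ones(n), np.zeros(n)]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return res.fun ** 2 if res.success else np.inf           # infeasible: MakeOpt is empty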
2.3 Hitting times

It turns out that we can bound the infinity norm of the bias in terms of the hitting time of an optimal
policy. For any policy π define its hitting time to be the worst case expected time to reach one state
from another:

    T_π(P) := max_{i≠j} E^{π,P}_j[ min{t > 0 : s_t = i} ].    (10)

The following constant is the minimum hitting time among optimal policies:

    T(P) := min_{π∈Π(O(P,A))} T_π(P).    (11)

The following constant is defined just for comparison with results in [2]. It is the worst case hitting
time over all policies:

    T_w(P) := max_{π∈Π_D} T_π(P).    (12)

We can now bound C(P) just in terms of the hitting time T(P) and Φ*(P).
Proposition 3. Suppose A(i) = A for all i ∈ S and that r(i, a) ∈ [0, 1] for all i ∈ S, a ∈ A. Then
for any P,

    C(P) ≤ 2|S||A|T(P)² / Φ*(P).

See the appendix for a proof.
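For a concrete feel for definition (10): under a fixed policy, the expected time to reach a target state satisfies the first-step identity x = 1 + Qx, where Q is P^π restricted to the non-target states. The following minimal numpy sketch (our illustration, not the paper's code) computes T_π(P) by solving that linear system once per target state.

import numpy as np

def hitting_time(P_pi):
    # Worst-case expected hitting time T_pi(P) of Eq. (10) for a deterministic
    # policy pi, given its |S| x |S| row-stochastic transition matrix P_pi.
    n = P_pi.shape[0]
    worst = 0.0
    for i in range(n):                        # target state to be reached
        others = [j for j in range(n) if j != i]
        Q = P_pi[np.ix_(others, others)]      # transitions among non-target states
        # E_j[time to hit i] solves x = 1 + Q x; irreducibility makes I - Q invertible
        x = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
        worst = max(worst, float(x.max()))
    return worst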
3 The optimistic LP algorithm and its regret bound

Algorithm 1 Optimistic Linear Programming
1: for t = 0, 1, 2, . . . do
2:   s_t ← current state
3:
4:   . Compute solution for "empirical MDP" excluding "undersampled" actions
5:   ∀i, j ∈ S, a ∈ A(i):  p̂ᵗ_ij(a) ← (1 + N_t(i, a, j)) / (|S| + N_t(i, a))
6:   ∀i ∈ S:  D_t(i) ← {a ∈ A(i) : N_t(i, a) ≥ log² N_t(i)}
7:   ĥ_t, λ̂_t ← solution of the optimality equations (3) with P = P̂ᵗ, D = D_t
8:
9:   . Compute indices of all actions for the current state
10:  ∀a ∈ A(s_t):  U_t(s_t, a) ← sup_{q∈Δ₊} { r(s_t, a) + ⟨q, ĥ_t⟩ : ‖p̂ᵗ_{s_t}(a) − q‖₁ ≤ √(2 log t / N_t(s_t, a)) }
11:
12:  . Optimal actions (for the current problem) that are about to become "undersampled"
13:  Γ¹_t ← {a ∈ O(s_t; P̂ᵗ, D_t) : N_t(s_t, a) < log²(N_t(s_t) + 1)}
14:
15:  . The index maximizing actions
16:  Γ²_t ← arg max_{a∈A(s_t)} U_t(s_t, a)
17:
18:  if Γ¹_t = O(s_t; P̂ᵗ, D_t) then
19:    a_t ← any action in Γ¹_t
20:  else
21:    a_t ← any action in Γ²_t
22:  end if
23: end for
Algorithm 1 is the Optimistic Linear Programming algorithm. It is inspired by the algorithm of
Burnetas and Katehakis [1] but uses L1 distance instead of KL-divergence. At each time step t,
the algorithm computes the empirical estimates for transition probabilities. It then forms a restricted
problem ignoring relatively undersampled actions. An action a ∈ A(i) is considered "undersampled" if N_t(i, a) < log² N_t(i). The solutions ĥ_t, λ̂_t might be misleading due to estimation errors.
To avoid being misled by empirical samples we compute optimistic "indices" U_t(s_t, a) for all legal
actions a ∈ A(s_t) where s_t is the current state. The index for action a is computed by looking at
an L1-ball around the empirical estimate p̂ᵗ_{s_t}(a) and choosing a probability distribution q that
maximizes L(i, a, q, ĥ_t). Note that if the estimates were perfect, we would take an action maximizing
L(i, a, p̂ᵗ_{s_t}(a), ĥ_t). Instead, we take an action that maximizes the index. There is one case where we
are forced not to take an index-maximizing action. It is when all the optimal actions of the current
problem are about to become undersampled at the next time step. In that case, we take one of these
actions (steps 18-22). Note that both steps 7 and 10 can be done by solving LPs. The LP for solving
optimality equations can be found in several textbooks (see, for example, [9, p. 391]). The LP in step
10 is even simpler: the L1 ball has only 2|S| vertices and so we can maximize over them efficiently.
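The inner maximization in step 10 also has a simple direct solution: move as much probability mass as the L1 budget allows onto the state with the largest bias value, draining it from the states with the smallest values. The sketch below (our illustration, not the paper's code) computes the index this way; it assumes t ≥ 1 and N_t(s_t, a) ≥ 1, and the function name is ours.

import numpy as np

def olp_index(r_sa, p_hat, h, n_visits, t):
    # U_t(s, a): maximize r(s, a) + <q, h> over distributions q with
    # ||q - p_hat||_1 <= sqrt(2 log t / N_t(s, a)).
    delta = np.sqrt(2.0 * np.log(t) / n_visits)
    q = p_hat.copy()
    best = int(np.argmax(h))
    surplus = min(delta / 2.0, 1.0 - q[best])   # mass moved onto the best state
    q[best] += surplus
    for s in np.argsort(h):                     # drain the same mass from low-h states
        if s == best or surplus <= 0.0:
            continue
        take = min(q[s], surplus)
        q[s] -= take
        surplus -= take
    return r_sa + float(q @ h)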
Like the original Burnetas-Katehakis algorithm, the modified one also satisfies a logarithmic regret
bound as stated in the following theorem. Unlike the original algorithm, OLP does not need to know
the support sets of the transition distributions.
Theorem 4. Let π denote the policy implemented by Algorithm 1. Then we have, for all i_0 ∈ S and
for all P satisfying Assumption 1,

    limsup_{T→∞} R_T^π(i_0, P) / log T ≤ C(P),

where C(P) is the MDP-dependent constant defined in (8).
Proof. From Proposition 1 in [1], it follows that

    R_T^π(i_0, P) = Σ_{i∈S} Σ_{a∉O(i;P,A)} E^{π,P}_{i_0}[N_T(i, a)] φ*(i, a; P) + O(1).    (13)
Define the event

    A_t := { ‖ĥ_t − h*(P)‖_∞ ≤ ε  ∧  O(P̂ᵗ, D_t) ⊆ O(P) }.    (14)

Define,

    N¹_T(i, a; ε) := Σ_{t=0}^{T−1} 1[ (s_t, a_t) = (i, a) ∧ A_t ∧ U_t(i, a) ≥ L*(i; P, A) − 2ε ],
    N²_T(i, a; ε) := Σ_{t=0}^{T−1} 1[ (s_t, a_t) = (i, a) ∧ A_t ∧ U_t(i, a) < L*(i; P, A) − 2ε ],
    N³_T(ε) := Σ_{t=0}^{T−1} 1[ Ā_t ],

where Ā_t denotes the complement of A_t. For all ε > 0,

    N_T(i, a) ≤ N¹_T(i, a; ε) + N²_T(i, a; ε) + N³_T(ε).    (15)

The result then follows by combining (13) and (15) with the following three propositions and then
letting ε → 0 sufficiently slowly.
Proposition 5. For all P and i_0 ∈ S, we have

    lim_{ε→0} limsup_{T→∞} Σ_{i∈S} Σ_{a∉O(i;P,A)} E^{π,P}_{i_0}[N¹_T(i, a; ε)] φ*(i, a; P) / log T ≤ C(P).

Proposition 6. For all P, i_0, i ∈ S, a ∉ O(i; P, A) and ε sufficiently small, we have

    E^{π,P}_{i_0}[N²_T(i, a; ε)] = o(log T).

Proposition 7. For all P satisfying Assumption 1, i_0 ∈ S and ε > 0, we have

    E^{π,P}_{i_0}[N³_T(ε)] = o(log T).

4 Proofs of auxiliary propositions

We prove Propositions 5 and 6. The proof of Proposition 7 is almost the same as that of Proposition 5
in [1] and therefore omitted (for details, see Chapter 6 in the first author's thesis [8]). The proof of
Proposition 6 is considerably simpler (because of the use of L1 distance rather than KL-divergence)
than the analogous Proposition 4 in [1].
Proof of Proposition 5. There are two cases depending on whether (i, a) ∈ Crit(P) or not. If
(i, a) ∉ Crit(P), there is an ε₀ > 0 such that MakeOpt(i, a; P, ε₀) = ∅. On the event A_t (recall the
definition given in (14)), we have |⟨q, ĥ_t⟩ − ⟨q, h*(P)⟩| ≤ ε for any q ∈ Δ₊. Therefore,

    U_t(i, a) ≤ sup_{q∈Δ₊} { r(i, a) + ⟨q, ĥ_t⟩ }
             ≤ sup_{q∈Δ₊} { r(i, a) + ⟨q, h*(P)⟩ } + ε
             < L*(i; P, A) − ε₀ + ε    [∵ MakeOpt(i, a; P, ε₀) = ∅]
             < L*(i; P, A) − 2ε    provided that 3ε < ε₀.

Therefore for ε < ε₀/3, N¹_T(i, a; ε) = 0.

Now suppose (i, a) ∈ Crit(P). The event U_t(i, a) ≥ L*(i; P, A) − 2ε is equivalent to

    ∃q ∈ Δ₊ s.t. ‖p̂ᵗ_i(a) − q‖₁² ≤ 2 log t / N_t(i, a)  ∧  r(i, a) + ⟨q, ĥ_t⟩ ≥ L*(i; P, A) − 2ε.

On the event A_t, we have |⟨q, ĥ_t⟩ − ⟨q, h*(P)⟩| ≤ ε and thus the above implies

    ∃q ∈ Δ₊ s.t. ‖p̂ᵗ_i(a) − q‖₁² ≤ 2 log t / N_t(i, a)  ∧  r(i, a) + ⟨q, h*(P)⟩ ≥ L*(i; P, A) − 3ε.

Recalling the definition (6) of J_{i,a}(p; P, ε), we see that this implies

    J_{i,a}(p̂ᵗ_i(a); P, 3ε) ≤ 2 log t / N_t(i, a).

We therefore have,

    N¹_T(i, a; ε) ≤ Σ_{t=0}^{T−1} 1[ (s_t, a_t) = (i, a) ∧ J_{i,a}(p̂ᵗ_i(a); P, 3ε) ≤ 2 log t / N_t(i, a) ]
                 ≤ Σ_{t=0}^{T−1} 1[ (s_t, a_t) = (i, a) ∧ J_{i,a}(p_i(a); P, 3ε) ≤ 2 log t / N_t(i, a) + η ]
                 + Σ_{t=0}^{T−1} 1[ (s_t, a_t) = (i, a) ∧ J_{i,a}(p_i(a); P, 3ε) > J_{i,a}(p̂ᵗ_i(a); P, 3ε) + η ],    (16)

where η > 0 is arbitrary. Each time the pair (i, a) occurs, N_t(i, a) increases by 1, so the first count
is no more than

    2 log T / ( J_{i,a}(p_i(a); P, 3ε) − η ).    (17)

To control the expectation of the second sum, note that continuity of J_{i,a} in its first argument implies
that there is a function f such that f(η) > 0 for η > 0, f(η) → 0 as η → 0, and J_{i,a}(p_i(a); P, 3ε) >
J_{i,a}(p̂ᵗ_i(a); P, 3ε) + η implies that ‖p_i(a) − p̂ᵗ_i(a)‖₁ > f(η). By a Chernoff-type bound, we have,
for some constant C₁,

    P^{π,P}_{i_0}[ ‖p_i(a) − p̂ᵗ_i(a)‖₁ > f(η) | N_t(i, a) = m ] ≤ C₁ exp(−m f(η)²),

and so the expectation of the second sum is no more than

    E^{π,P}_{i_0}[ Σ_{t=0}^{T−1} C₁ exp(−N_t(i, a) f(η)²) ] ≤ Σ_{m=1}^{∞} C₁ exp(−m f(η)²) = C₁ / (1 − exp(−f(η)²)).    (18)

Combining the bounds (17) and (18) and plugging them into (16), we get

    E^{π,P}_{i_0}[N¹_T(i, a; ε)] ≤ 2 log T / ( J_{i,a}(p_i(a); P, 3ε) − η ) + C₁ / (1 − exp(−f(η)²)).

Letting η → 0 sufficiently slowly, we get that for all ε > 0,

    E^{π,P}_{i_0}[N¹_T(i, a; ε)] ≤ 2 log T / J_{i,a}(p_i(a); P, 3ε) + o(log T).

Therefore,

    lim_{ε→0} limsup_{T→∞} E^{π,P}_{i_0}[N¹_T(i, a; ε)] / log T ≤ lim_{ε→0} 2 / J_{i,a}(p_i(a); P, 3ε) = 2 / K(i, a; P),

where the last equality follows from the definition (7) of K(i, a; P). The result now follows by
summing over (i, a) pairs in Crit(P).
Proof of Proposition 6. Define the event

    A′_t(i, a; ε) := { (s_t, a_t) = (i, a) ∧ A_t ∧ U_t(i, a) < L*(i; P, A) − 2ε },

so that we can write

    N²_T(i, a; ε) = Σ_{t=0}^{T−1} 1[ A′_t(i, a; ε) ].    (19)

Note that on A′_t(i, a; ε), we have Γ¹_t ⊆ O(i; P̂ᵗ, D_t) ⊆ O(i; P, A) while a ∉ O(i; P, A), so a ∉ Γ¹_t. But a was
taken at time t, so it must have been in Γ²_t, which means it maximized the index. Therefore, for all
optimal actions a* ∈ O(i; P, A), we have, on the event A′_t(i, a; ε),

    U_t(i, a*) ≤ U_t(i, a) < L*(i; P, A) − 2ε.

Since L*(i; P, A) = r(i, a*) + ⟨p_i(a*), h*(P)⟩, this implies

    ∀q ∈ Δ₊ with ‖q − p̂ᵗ_i(a*)‖₁ ≤ √(2 log t / N_t(i, a*)):  ⟨q, ĥ_t⟩ < ⟨p_i(a*), h*(P)⟩ − 2ε.

Moreover, on the event A_t, |⟨q, ĥ_t⟩ − ⟨q, h*(P)⟩| ≤ ε. We therefore have, for any a* ∈ O(i; P, A),

    A′_t(i, a; ε) ⊆ { ∀q ∈ Δ₊ with ‖q − p̂ᵗ_i(a*)‖₁ ≤ √(2 log t / N_t(i, a*)):  ⟨q, h*(P)⟩ < ⟨p_i(a*), h*(P)⟩ − ε }
                 ⊆ { ∀q ∈ Δ₊ with ‖q − p̂ᵗ_i(a*)‖₁ ≤ √(2 log t / N_t(i, a*)):  ‖q − p_i(a*)‖₁ > ε / ‖h*(P)‖_∞ }
                 ⊆ { ‖p̂ᵗ_i(a*) − p_i(a*)‖₁ > ε / ‖h*(P)‖_∞ + √(2 log t / N_t(i, a*)) }
                 ⊆ ∪_{m=1}^{t} { N_t(i, a*) = m ∧ ‖p̂ᵗ_i(a*) − p_i(a*)‖₁ > ε / ‖h*(P)‖_∞ + √(2 log t / m) }.

Using a Chernoff-type bound, we have, for some constant C₁,

    P^{π,P}_{i_0}[ ‖p̂ᵗ_i(a*) − p_i(a*)‖₁ > δ | N_t(i, a*) = m ] ≤ C₁ exp(−m δ² / 2).

Using a union bound, we therefore have,

    P^{π,P}_{i_0}[ A′_t(i, a; ε) ] ≤ Σ_{m=1}^{t} C₁ exp( −(m/2) ( ε/‖h*(P)‖_∞ + √(2 log t / m) )² )
        = (C₁/t) Σ_{m=1}^{t} exp( −m ε² / (2‖h*(P)‖²_∞) − ε √(2 m log t) / ‖h*(P)‖_∞ ) = o(1/t).

Combining this with (19) proves the result.
References
[1] Burnetas, A.N. & Katehakis, M.N. (1997) Optimal adaptive policies for Markov decision processes. Mathematics of Operations Research 22(1):222-255.
[2] Auer, P. & Ortner, R. (2007) Logarithmic online regret bounds for undiscounted reinforcement learning. Advances in Neural Information Processing Systems 19. Cambridge, MA: MIT Press.
[3] Lai, T.L. & Robbins, H. (1985) Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics 6(1):4-22.
[4] Brafman, R.I. & Tennenholtz, M. (2002) R-MAX - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research 3:213-231.
[5] Auer, P. (2002) Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research 3:397-422.
[6] Auer, P., Cesa-Bianchi, N. & Fischer, P. (2002) Finite-time analysis of the multiarmed bandit problem. Machine Learning 47(2-3):235-256.
[7] Strehl, A.L. & Littman, M. (2005) A theoretical analysis of model-based interval estimation. In Proceedings of the Twenty-Second International Conference on Machine Learning, pp. 857-864. ACM Press.
[8] Tewari, A. (2007) Reinforcement Learning in Large or Unknown MDPs. PhD thesis, Department of Electrical Engineering and Computer Sciences, University of California at Berkeley.
[9] Puterman, M.L. (1994) Markov Decision Processes: Discrete Stochastic Dynamic Programming. New York: John Wiley and Sons.
2,569 | 333 | Neural Dynamics of
Motion Segmentation and Grouping
Ennio Mingolla
Center for Adaptive Systems, and
Cognitive and Neural Systems Program
Boston University
111 Cummington Street
Boston, MA 02215
Abstract
A neural network model of motion segmentation by visual cortex is described. The model clarifies how preprocessing of motion signals by a
Motion Oriented Contrast Filter (MOC Filter) is joined to long-range cooperative motion mechanisms in a motion Cooperative Competitive Loop
(CC Loop) to control phenomena such as induced motion, motion capture, and motion aftereffects. The total model system is a motion Boundary Contour System (BCS) that is computed in parallel with a static BCS
before both systems cooperate to generate a boundary representation for
three dimensional visual form perception. The present investigations clarify how the static BCS can be modified for use in motion segmentation problems, notably for analyzing how ambiguous local movements (the aperture
problem) on a complex moving shape are suppressed and actively reorganized into a coherent global motion signal.
1
INTRODUCTION: WHY ARE STATIC AND MOTION
BOUNDARY CONTOUR SYSTEMS NEEDED?
Some regions, notably MT, of visual cortex are specialized for motion processing.
However, even the earliest stages of visual cortex processing, such as simple cells in
VI, require stimuli that change through time for their maximal activation and are
direction-sensitive. Why has evolution generated regions such as MT, when even
VI is change-sensitive and direction-sensitive? What computational properties are
achieved by MT that are not already available in VI?
The monocular Boundary Contour System (BCS) theory of Grossberg and Mingolla
(1985a, 1985b, 1987), and its binocular generalization (Grossberg, 1987, Grossberg
& Marshall, 1989), has modeled many boundary segmentation properties of VI
and its prestriate projections. The BCS has until now been used to analyze data
generated in response to static visual images. Henceforth I will therefore call such a
BCS a static BCS model. Nonetheless its model cells can be gated by cells sensitive
to image transients to generate receptive fields sensitive to visual motion. How does
a motion BCS differ from a static BCS whose cells are sensitive to image transients?
2
STATIC AND MOTION FILTERING:
DIRECTION-OF-CONTRAST AND
DIRECTION-OF-MOTION
That boundaries of opposite direction-of-contrast are perceptually linked is vividly
illustrated by the reverse-contrast Kanizsa square. A fundamental property of the
front end of the BCS, which is a Static Oriented Contrast Filter (SOC Filter), is
that its output is insensitive to direction-of-contrast, in order to support perception
of boundaries in variable illumination. This insensitivity is achieved through the
pooling by units identified with complex cells of information of units identified with
simple cells, whose receptive fields are elongated and sensitive to opposite contrast
polarities. The pooling implies that the complex cell layer of the SOC Filter is
insensitive to direction-of-motion, as well as to direction-of-contrast. Evidently, any
useful filter that will act as the front-end of a motion segmentation system must be
sensitive to direction-of-motion while being insensitive to direction-of-contrast.
3
GLOBAL SEGMENTATION AND GROUPING:
FROM LOCALLY AMBIGUOUS MOTION SIGNALS
TO COHERENT OBJECT MOTION SIGNALS
In their discussion of "velocity space," Adelson and Movshon (1980, 1982) introduce
diagrams similar to Figure 1a to illustrate local motion direction (and speed) ambiguity from information confined to an aperture. In Figure 1a the length of arrows
codes possible trajectories of a point which would be consistent with the measured
change of contrast over time of the cell in question; for this reason, it is sometimes
said that early cells are sensitive to only the normal component of velocity. Figure
1b shows another view of this situation; the length of arrows is roughly proportional to a cell's "prior probability distribution" for interpreting changing stimulation
as occurring in one of several directions, of which the direction perpendicular to
the cell receptive field's axis of orientation is locally preferred. Note that in this
conception, if a cell with an oriented receptive field (e.g. a simple cell) is being stimulated by an edge that is not perfectly aligned with its receptive field's dark-to-light
contrast axis, its "preferred direction" will not correspond to that perpendicular to
the edge. In this case, however, it is assumed that within a hypercolumn of cells
tuned to similar spatial frequency, contrast, and temporal parameters but varying
in preferred orientation, some other cell whose preferred orientation was more nearly aligned with the edge would generate a stronger signal than the cell in question.
Thus, the distribution of motion signals across cells tuned to aU orientations would
favor the direction perpendicular to the orientation of the edge.

[Figure 1 appears here.]
Figure 1: Motion direction ambiguity along an edge
4
STATIC AND MOTION COOPERATIVE GROUPING
The static BCS contains a process of for long-range completion, regularization,
and grouping which is mediated by a cooperative-competitive feedback loop (CC
Loop) whose competitive layer is identified with hypercomplex cells of V2 and whose
cooperative layer contains units called "bipole cells," which are hypothesized to exist
in the projections of V2 cells. The CC Loop seeks to form and sharpen boundaries
whenever evidence from bottom-up inputs in two regions indicates that a collinear
(possibly curved) continuation of boundary activity is called for. A horizontally
tuned bipole cell sends feedback to horizontally tuned cells in the competitive layer.
In considering how the static CC Loop must be modified to deal with motion segmentation, consider that motion is not binary but continuously valued; headings
can be, for example, "north by northwest." The analysis of moving contours thus
requires one more degree of freedom than the analysis of static contours, for a contour of a given orientation can be moving in an infinity of directions, and conversely
contours of any orientation can be moving in the same direction; thus a modification
in the structure of the static BCS is required. Consider again the aperture problem.
In the barberpole illusion the perception of motion direction along entire contours
- whose measurement by cells with localized receptive fields is everywhere subject
to the aperture problem - is determined by the perceived motion of their endpoints
(Wallach, 1976). Endstopping in simple cells of the MOC Filter can provide the
enhancement of signals from segment endpoints, enabling the cooperative bipole
cells of the motion CC Loop to reorganize the ambiguous local motion signals from
the interiors of the diagonal segments into signals that are consistent with those of
the endpoints.
5
GENERALIZING THE GROSSBERG-RUDD MOC
FILTER FOR SEGMENTATION AND GROUPING
The original Grossberg & Rudd MOC Filter is illustrated in Figure 2. The goal is to
generalize certain of its functions to handle 2-D (two-dimensional) motion segmentation issues. The MOC Filter is insensitive to direction-of-contrast but sensitive
to direction-of-motion. Level 1 registers the input pattern. Level 2 consists of sustained response cells with oriented receptive fields that are sensitive to direction-of-contrast. Level 3 consists of transient response cells with unoriented receptive fields
that are sensitive to direction of change in the total cell input. Level 4 cells combine
sustained cell and transient cell signals to become sensitive to direction-of-motion
and sensitive to direction-of-contrast. Level 5 cells combine Level 4 cells to become
sensitive to direction-of-motion and insensitive to direction-of-contrast.
[Figure 2 appears here, showing the filter's stages: Level 1 (input), Level 2 (sustained cells), Level 3 (transient cells), a gate, a short-range space-filter, and Level 5 (competition).]
Figure 2: The Motion Oriented Contrast (MOC) Filter
The full domain of motion segmentation and grouping includes such problems as determining structure in depth from motion, motion transparency, and motion grouping amid occlusion. Although the motion BCS is conceived with these and related
difficult phenomena in mind, I will instead focus on the elementary grouping operations necessary to perform detections of object motion within the visual field.
Even here difficult issues arise. Consider the lower right corner of a homogeneous
rectangular form of relatively high luminance that is moving diagonally upward
and to the right on a homogeneous background of relatively low luminance. (See
Figure 3a.) In region A dark-to-light (luminance increasing over time) transition
occurs at a vertical edge, while in region B a light-to-dark (luminance decreasing
over time) transition occurs at a horizontal edge. Both the regions of horizontal
and vertical contrast near the corner provide signals to the MOC Filter, provided
that the sustained cells of Level 2 (Figure 2) are taken to be spatially laid out as
indicated in Figure 3b. Over three successive time increments, the contours of the
rectangle of Figure 3a occur in the positions indicated, while luminance increases
along the vertical edge and decreases along the horizontal edge. If certain of the
sustained cell receptive fields sending inputs to Level 4 of the MOC Filter (Figure
2) were arranged as indicated, a diagonal motion signal could be generated from
both vertically and horizontally oriented cells, in conjunction with luminance gating
signals of opposite signs. (Of course, motion signals of many other directions will
also be generated along the lengths of the horizontal and vertical edges; these will
be considered subsequently.) In other words, for at least some of the gating nodes
of Layer 4 (Figure 2), the layout of receptive field centers of contributing sustained
cells of Layer 2 is taken to be in a direction diagonal to the orientational preference of the individual sustained cells. It would make no sense to build a motion
filter whose receptive field centers were arrayed collinearly with the contributing
sustained cell's orientational preference - although this type of arrangement might
be suitable for collinear completion in a static form system. Accordingly, it appears
that a variant of a "sine law" exists, whereby the contribution of any sustained cell
at Level 2 to a Level 4 gating cell is modulated by the (absolute value of the) sine
of the angle formed between the sustained cell's orientational preference and the
gating cell's directional preference.
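Read literally, this modulation rule is easy to state in code. The following minimal Python sketch (ours, with hypothetical names; angles in radians) illustrates the weighting:

import numpy as np

def sine_law_weight(orientation, direction):
    # A Level 2 sustained cell's input to a Level 4 gating cell is scaled by
    # |sin| of the angle between the cell's preferred orientation and the
    # gating cell's preferred direction.
    return abs(np.sin(orientation - direction))

def level4_input(sustained_response, orientation, direction):
    return sustained_response * sine_law_weight(orientation, direction)

# A cell oriented along the motion direction contributes nothing (sin 0 = 0);
# one oriented perpendicular to it contributes maximally (|sin 90 deg| = 1).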
[Figure 3 appears here: two panels showing contour positions at t = 0, 1, 2 and the luminance profile near the corner.]
Figure 3: The corner of a light rectangle moving diagonally
The long range filter (Level 5, Figure 2) can simultaneously accept motion signals
from both the horizontal and vertical edges of the moving corner, despite the gating of one set of signals by transient "luminance increasing" detectors (Level 3,
Figure 2) and gating of another set by "luminance decreasing" detectors. Thus
while simultaneous increase and decrease of luminance is logically impossible in an
infinitesimal area, and while a too rapid change from increase to decrease may be
unresolvable by sustained cells at Level 2, the simultaneous nearby increase and decrease of luminance with a coherent trajectory or direction despite different contour
orientations is fodder for the long-range filter. Note that the long-range filter of the
MOC Filter is not the same as the long-range grouping stage of the CC Loop.
6
ENDSTOPPING: GENERATION OF A
TERMINATOR OR CORNER ADVANTAGE IN
MOTION SIGNALS
In discussing the barberpole illusion I referred to an "advantage" for motion signals
near terminators or corners of contours. The designation "advantage" connotes that
those signals tend to be better indicators of object motion than signals generated
from a relatively straight interior of a contour. For this advantage to be manifest in
perception, however, that advantage must also be one of signal strength, the more
so because the regions or spatial extent of interior motion signals is often larger
than the region of terminator or corner signals. The source of the advantage would
appear to involve endstopping at the very front end of the MOC Filter. Many simple
cells, identified with the orientation and direction-of-contrast sensitive sustained
cells of Level 2 of the MOC Filter, exhibit endstopping (Dreher, 1972). (Note that
this endstopping is functionally analogous to the first competitive stage of the SOC
Filter.) Strong endstopping, whereby only signals at terminators survive, can reduce
the problem of determining motion direction to one of tracking an isolated region of
activity. In the case of weak endstopping considered here, however, surviving signals
indicating "locally preferred" directions can continue to confound the problem of
motion segmentation and grouping.
7
CONSENSUS AT CORNERS: GAUSSIAN SPACE
AVERAGING AND DIRECTIONAL COMPETITION
In the weak endstopping case the local motion signals from the lower right corner of the moving rectangle would have roughly the form diagrammed in Figure 4a.
While there is some preference for diagonal (up-and-to-the-right) signals, local motion signals of other directions also exist. (b) A mechanism is needed to combine
different directions signals into a single coherent local direction signal. (c) The signal combination can be accomplished by a motion analog of the second competitive
stage among orientations of the SOC Filter, as described in Grossberg & Mingolla
(1987). An excitatory on-center, inhibitory off-surround network organization among
cells coding different directions-of-motion at the same position can accomplish the
desired pooling and choice through competitive peak summation and sharpening.
Note that the domain of spatial averaging of the Gaussian filter (transition from
Level 4 to Level 5 of the MaC Filter) is presumed to be large enough to span the
signals generated by the ends of the leading vertical and trailing horizontal edges.
At Level 5, then, signals of many directions occur for cells coding the same position. Those directions will have the appropriate "central tendency", however, and
a simple center-surround competition in the space of directions, analogous to the
revised version of the second competitive stage (for orientations) of the static BCS
described by Grossberg & Mingolla (1987), suffices to choose the direction which is
most consistent with surrounding input data at each location. (See Figure 4b.)
In this article I have described motion analysis mechanisms whereby the visual
system frees itself from an excessive reliance on either purely local (short-range filtering) computations or top-down (cognitive or expectancy based) computations.
Instead, within a perceptual middle ground, competitive and cooperative interactions within a parallel and structured network with several scales of interaction
help to choose and enhance those aspects of local data which contribute to coherent
and consistent measures of object motion.
[Figure 4 appears here, with panels (a), (b) and (c).]
Figure 4: Resolution of ambiguous signals at corners
Acknowledgements
The research described was performed jointly with Stephen Grossberg.
The author was supported in part by AFOSR F49620-87-C-0018.
References
Adelson, E. H. & Movshon, J. A. (1980). Journal of the Optical Society of America, 70, 1605.
Adelson, E. H. & Movshon, J. A. (1982). Nature, 300, 523-525.
Dreher, B. (1972). Investigative Ophthalmology, 11, 355-356.
Grossberg, S. (1987). Perception and Psychophysics, 41, 87-116.
Grossberg, S. & Marshall, J. (1989). Neural Networks, 2, 29-51.
Grossberg, S. & Mingolla, E. (1985a). Psychological Review, 92, 173-211.
Grossberg, S. & Mingolla, E. (1985b). Perception and Psychophysics, 38, 141-171.
Grossberg, S. & Mingolla, E. (1987). Computer Vision, Graphics, and Image Processing, 37, 116-165.
Grossberg, S. & Rudd, M. (1989). Neural Networks, 2, 421-450.
Wallach, H. (1976). On perception. New York: Quadrangle.
combination:1 across:1 suppressed:1 modification:1 confound:1 taken:2 aftereffect:1 monocular:1 mechanism:3 needed:2 mind:1 end:4 sending:1 available:1 operation:1 v2:1 appropriate:1 gate:1 northwest:1 top:1 build:1 society:1 already:1 question:2 occurs:2 arrangement:1 receptive:11 diagonal:4 said:1 exhibit:1 street:1 extent:1 consensus:1 reason:1 length:3 code:1 modeled:1 polarity:1 difficult:2 reorganized:1 gated:1 perform:1 vertical:6 revised:1 enabling:1 curved:1 situation:1 kanizsa:1 hypercolumn:1 required:1 moe:1 coherent:5 perception:7 pattern:1 program:1 suitable:1 indicator:1 axis:2 mediated:1 prior:1 review:1 determining:2 contributing:2 law:1 afosr:1 generation:1 designation:1 filtering:2 proportional:1 localized:1 degree:1 consistent:4 signa:1 article:1 course:1 ift:2 diagonally:1 excitatory:1 supported:1 free:1 heading:1 absolute:1 f49620:1 boundary:9 feedback:2 depth:1 transition:3 contour:12 author:1 adaptive:1 preprocessing:1 expectancy:1 hypercomplex:1 preferred:4 aperture:4 global:2 assumed:1 why:2 stimulated:1 nature:1 ca:1 complex:3 domain:2 terminator:4 arrow:2 arise:1 referred:1 position:3 perceptual:1 ib:1 down:1 gating:6 evidence:1 grouping:13 exists:1 perceptually:1 illumination:1 occurring:1 boston:2 generalizing:1 visual:8 horizontally:3 tracking:1 joined:1 ma:1 goal:1 change:5 determined:1 averaging:2 total:2 called:2 tendency:1 la:2 indicating:1 support:1 modulated:1 phenomenon:2 |
2,570 | 3,330 | Distributed Inference for Latent Dirichlet Allocation
David Newman, Arthur Asuncion, Padhraic Smyth, Max Welling
Department of Computer Science
University of California, Irvine
{newman,asuncion,smyth,welling}@ics.uci.edu
Abstract
We investigate the problem of learning a widely-used latent-variable model, the
Latent Dirichlet Allocation (LDA) or "topic" model, using distributed computation, where each of P processors only sees 1/P of the total data set. We propose two distributed inference schemes that are motivated from different perspectives. The first scheme uses local Gibbs sampling on each processor with periodic updates; it is simple to implement and can be viewed as an approximation to
a single-processor implementation of Gibbs sampling. The second scheme relies on a hierarchical Bayesian extension of the standard LDA model to directly
account for the fact that data are distributed across processors; it has a theoretical guarantee of convergence but is more complex to implement than the approximate method. Using five real-world text corpora we show that distributed
learning works very well for LDA models, i.e., perplexity and precision-recall
scores for distributed learning are indistinguishable from those obtained with
single-processor learning. Our extensive experimental results include large-scale
distributed computation on 1000 virtual processors, and speedup experiments of
learning topics in a 100-million word corpus using 16 processors.
1 Introduction
Very large data sets, such as collections of images, text, and related data, are becoming increasingly
common, with examples ranging from digitized collections of books by companies such as Google
and Amazon, to large collections of images at Web sites such as Flickr, to the recent Netflix customer
recommendation data set. These data sets present major opportunities for machine learning, such
as the ability to explore much richer and more expressive models, as well as providing new and
interesting domains for the application of learning algorithms.
However, the scale of these data sets also brings significant challenges for machine learning, particularly in terms of computation time and memory requirements. For example, a text corpus with 1
million documents, each containing 1000 words on average, will require approximately 12 Gbytes
of memory to store the 10^9 words, which is beyond the main memory capacity for most single-processor machines. Similarly, if one were to assume that a simple operation (such as computing a
probability vector over categories using Bayes rule) would take on the order of 10^-6 sec per word,
then a full pass through 10^9 words will take 1000 seconds. Thus, algorithms that make multiple
passes over this sized corpus (such as occurs in many clustering and classification algorithms) will
have run times in days.

An obvious approach for addressing these time and memory issues is to distribute the learning
algorithm over multiple processors [1, 2, 3]. In particular, with P processors, it is somewhat trivial
to get around the memory problem by distributing 1/P of the total data to each processor. However,
the computation problem remains non-trivial for a fairly large class of learning algorithms, namely
how to combine local processing on each of the P processors to arrive at a useful global solution.
In this general context we investigate distributed learning algorithms for the LDA model [4]. LDA
models are arguably among the most successful recent learning algorithms for analyzing count data
such as text. However, they can take days to learn for large corpora, and thus, distributed learning
would be particularly useful for this type of model.
The novel contributions of this paper are as follows:
We introduce two algorithms that perform distributed inference for LDA models, one of
which is simple to implement but does not necessarily sample from the correct posterior
distribution, and the other which optimizes the correct posterior quantity but is more complex to implement and slower to run.
We demonstrate that both distributed algorithms produce models that are statistically indistinguishable (in terms of predictive power) from models obtained on a single processor, and
they can learn these models much faster than using a single processor while only requiring
storage of 1/P-th of the data on each processor.
2 Latent Dirichlet Allocation
Before introducing our distributed algorithms for LDA, we briefly review the standard LDA model.
LDA models each of D documents as a mixture over K latent topics, each being a multinomial
distribution over a word vocabulary of size W. For document j, we first draw a mixing proportion
θ_{k|j} from a Dirichlet with parameter α. For the i-th word in the document, a topic z_ij is drawn with
topic k chosen with probability θ_{k|j}, then word x_ij is drawn from the z_ij-th topic, with x_ij taking on
value w with probability φ_{w|z_ij}. Finally, a Dirichlet prior with parameter β is placed on the topics φ_{w|k}.
Thus, the generative process is given by

    θ_{k|j} ∼ D[α],    φ_{w|k} ∼ D[β],    z_ij ∼ θ_{k|j},    x_ij ∼ φ_{w|z_ij}.    (1)

Given the observed words x = {x_ij}, the task of Bayesian inference is to compute the posterior
distribution over the latent topic indices z = {z_ij}, the mixing proportions θ_{k|j}, and the topics
φ_{w|k}. An efficient procedure is to use collapsed Gibbs sampling [5], where θ and φ are marginalized
out, and the latent variables z are sampled. Given the current state of all but one variable z_ij, the
conditional probability of z_ij is

    p(z_ij = k | z^¬ij, x) ∝ (N^¬ij_wk + β) / (N^¬ij_k + Wβ) · (N^¬ij_kj + α),    w = x_ij,    (2)

where the superscript ¬ij means the corresponding data-item is excluded in the count values, and
where N_wkj = #{i : x_ij = w, z_ij = k}. We use the convention that missing indices are summed
out: N_wk = Σ_j N_wkj and N_kj = Σ_w N_wkj.
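A minimal single-processor sketch of one sweep of this sampler may help make Eq. (2) concrete. This is our code, not the authors': words and docs are flat arrays over tokens, the count arrays are floats maintained incrementally, and the names are illustrative.

import numpy as np

def gibbs_sweep(words, docs, z, Nwk, Nkj, Nk, alpha, beta):
    # One pass of collapsed Gibbs sampling, Eq. (2). words[n], docs[n] give the
    # word id and document id of token n; z[n] is its current topic assignment.
    W, K = Nwk.shape
    for n in range(len(words)):
        w, j, k = words[n], docs[n], z[n]
        Nwk[w, k] -= 1; Nkj[k, j] -= 1; Nk[k] -= 1     # remove token n from counts
        p = (Nwk[w, :] + beta) / (Nk + W * beta) * (Nkj[:, j] + alpha)
        k = np.random.choice(K, p=p / p.sum())         # sample z_n from Eq. (2)
        z[n] = k
        Nwk[w, k] += 1; Nkj[k, j] += 1; Nk[k] += 1     # add it back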
3 Distributed Inference Algorithms for LDA
We now present two versions of LDA where the data and the parameters are distributed over P
distinct processors. We distribute the D documents over the processors, with D_p = D/P documents
on each processor. We partition the data x (words from the D documents) into x = {x_1, ..., x_p, ..., x_P}
and the corresponding topic assignments into z = {z_1, ..., z_p, ..., z_P}, where x_p and z_p only exist
on processor p. Document-specific counts N_kj are likewise distributed; however, every processor
maintains its own copy of the word-topic and topic counts, N_wk and N_k. We denote processor-specific
counts as N_wkp, N_kjp and N_kp.
3.1 Approximate Distributed Inference
In our Approximate Distributed LDA model (AD-LDA), we simply implement LDA on each processor, and simultaneous Gibbs sampling is performed independently on each of the processors,
as if each processor thinks it is the only processor. On processor 5 , given the current state of all but
one variable )
Y , the topic assignment to the
word in document ,
Y]\ 4 Y is sampled from:
8
Y : : :
576 )
Y 2
4 Y9
1 ^;_<
Y 6
Y
6 `>a@ 9
Y ; 6 $
?>A@ 9 Y ;
Y >a@ B9 C- / Y ; D
2
(3)
?
?
?w|k
?p
? k| j
?
? k| jp
Zij
?k
?w|k
Zijp
?w|kp
X ij
Nj
X ijp
N jp
P
K
D
K
Dp
P
Figure 1: (Left) Graphical model for LDA. (Right) Graphical model for HD-LDA. Variables are repeated over
the indices of the random variables. Square boxes indicate parameters.
Note that N_wk|p is not the result of separate LDA models running on separate data. In particular,
Σ_k N_k|p = N, where N is the total number of words across all processors, as opposed to the
number of words on processor p. After processor p has reassigned z_p, we have modified counts
N_kj|p, N_wk|p, and N_k|p. To merge back to a single set of counts, after a number of Gibbs sampling
steps (e.g., after a single pass through the data on each processor) we perform the global update,
using a reduce-scatter operation,

  N_wk ← N_wk + Σ_p (N_wk|p − N_wk),   then   N_wk|p ← N_wk,   (4)

where N_wk on the right-hand side are the counts that all processors started with before the sweep
of the Gibbs sampler. The counts N_k are computed by N_k = Σ_w N_wk. Note that this global update
correctly reflects the topic assignments z (i.e., N_wk can also be regenerated using z).
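Below is a sketch of one full AD-LDA iteration, simulating the P processors in a single process with numpy; a real implementation would perform the merge with an MPI-style reduce-scatter. The callable local_gibbs_sweep is a stand-in for a pass of the sampler in equation (3), assumed to mutate its count array in place.

```python
import numpy as np

def ad_lda_iteration(n_wk, local_counts, local_gibbs_sweep):
    """n_wk: merged word-topic counts; local_counts: list of P copies."""
    n_wk_start = n_wk.copy()            # counts all processors started with
    for p, n_wk_p in enumerate(local_counts):
        local_gibbs_sweep(p, n_wk_p)    # independent sampling, eq. (3)
    # Global update, eq. (4): accumulate every processor's count changes.
    n_wk = n_wk_start + sum(n_wk_p - n_wk_start for n_wk_p in local_counts)
    for p in range(len(local_counts)):
        local_counts[p] = n_wk.copy()   # all processors resume from the merge
    return n_wk, local_counts
```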
We can consider this algorithm to be an approximation to the single-processor Gibbs sampler in
the following sense: at the start of each iteration, all of the processors have the same set of counts.
However, as each processor starts sampling, the global count matrix is changing in a way that is
unknown to each processor. Thus, in Equation 3, the sampling is not being done according to the true
current global count (or true posterior distribution), but according to an approximation. We have
experimented with 'repairing' reversibility of the sampler by adding a phase which re-traces the
Gibbs moves starting at the (global) end-state, but we found that, due to the curse of dimensionality,
virtually all steps ended up being rejected.
3.2 Hierarchical Distributed Inference
A more principled way to model parallel processes is to build them directly into the probabilistic
model. Imagine a parent collection of topics φ. This parent has children φ_p which represent
the topic distributions on the various processors. We assume φ_k|p is sampled from φ_k according to
a Dirichlet distribution with topic-dependent strength parameter b_k. The model that lives on each
processor is simply an LDA model. Hence, the generative process is given by

  φ_k ~ D[γ],   b_k ~ G[a, c],   φ_k|p ~ D[b_k φ_k],
  θ_j|p ~ D[α],   z_ij|p ~ θ_j|p,   x_ij|p ~ φ_{z_ij|p}|p.   (5)
The graphical model corresponding to this Hierarchical Distributed LDA (HD-LDA) is shown on
the right of Figure 1, with standard LDA shown on the left for comparison. This model is different
from the two other topic hierarchies we found in the literature, namely (1) the deeper version of the
hierarchical Dirichlet process mentioned in [6] and (2) Pachinko allocation [7]. The first places a
deeper hierarchical prior on θ (instead of on φ), while the second deals with a document-specific
hierarchy of topic assignments. These types of hierarchies do not suit our need to facilitate parallel
computation.
As is the case for LDA, inference for HD-LDA is most efficient if we marginalize out θ and φ_p. We
derive the following conditional probabilities necessary for the Gibbs sampler,

  P(z_ij|p = k | z_p^¬ij, x_p) ∝ (α + N_kj|p^¬ij) · (b_k φ_wk + N_wk|p^¬ij) / (b_k + N_k|p^¬ij).   (6)
In our experiments we learn MAP estimates for the global variables φ, b and α. Alternatively,
one can derive Gibbs sampling equations using the auxiliary variable method explained in [6], but
we leave exploration of this inference technique for future research. Inference is thus based on
integrating out θ and φ_p, sampling z and learning the MAP values of φ, b and α. The entire
algorithm can be understood as expectation maximization on a collapsed space, where the M-step
corresponds to the MAP updates and the E-step corresponds to sampling. As such, the proposed Monte
Carlo EM (MCEM) algorithm is guaranteed to converge in expectation (e.g., [8]). The MAP learning
rules are derived by using the bounds derived in [9]. They are fixed-point iterations of the form

  φ_wk ∝ (γ − 1) + b_k φ_wk Σ_p [Ψ(b_k φ_wk + N_wk|p) − Ψ(b_k φ_wk)],

  b_k ← ( (a − 1) + b_k Σ_p Σ_w φ_wk [Ψ(b_k φ_wk + N_wk|p) − Ψ(b_k φ_wk)] )
        / ( 1/c + Σ_p [Ψ(b_k + N_k|p) − Ψ(b_k)] ),

  α ← α Σ_p Σ_j Σ_k [Ψ(α + N_kj|p) − Ψ(α)] / ( K Σ_p Σ_j [Ψ(Kα + N_j|p) − Ψ(Kα)] ),   (7)
where Ψ is the digamma function. Careful selection of hyper-parameters is critical to making
HD-LDA work well, and we used our experience with AD-LDA to guide these choices. For AD-LDA
every processor's copy satisfies Σ_k N_k|p = N, but for HD-LDA Σ_k N_k|p is only the number of
words on processor p, so we choose a and c accordingly to set the mode of b_k. Finally, we choose
the remaining hyper-parameters to make the mode of the processor topics φ_k|p match the smoothing
implied by the value of β used in our LDA and AD-LDA experiments.
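Digamma fixed points of this kind are easy to implement. The sketch below shows a generic Minka-style MAP update for a symmetric Dirichlet parameter α under an assumed Gamma(a, c) prior; it illustrates the flavour of equation (7) rather than reproducing the paper's exact updates.

```python
import numpy as np
from scipy.special import digamma

def update_alpha(alpha, n_kj, a=1.0, c=1e9, iters=20):
    """n_kj: K x D topic-document counts; (a, c) are Gamma prior parameters."""
    K, D = n_kj.shape
    n_j = n_kj.sum(axis=0)
    for _ in range(iters):
        num = (a - 1.0) + alpha * (digamma(n_kj + alpha) - digamma(alpha)).sum()
        den = 1.0 / c + K * (digamma(n_j + K * alpha) - digamma(K * alpha)).sum()
        alpha = num / den
    return alpha
```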
We can view HD-LDA as a mixture model with P LDA mixture components, where the data have
been hard-assigned to their respective clusters (processors). The parameters of the clusters are
generated from a shared prior distribution. This view clarifies the procedure we have adopted for
testing: first we sample assignment variables z for the first half of the test document (analogous to
folding-in). Given these samples we compute the likelihood of the test document under the model for
each processor. Assuming equal prior weights for each processor we then compute responsibilities,
which are given by the likelihoods, normalized over processors. The probability of the remainder of
the test document is then given by the responsibility-weighted average over the processors.
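A numerically stable sketch of this responsibility-weighted scoring, assuming the per-processor log-likelihoods of the fold-in part and of the remaining words have already been computed:

```python
import numpy as np

def test_log_prob(loglik_foldin, loglik_rest):
    """Each argument is a length-P array of per-processor log-likelihoods."""
    r = np.exp(loglik_foldin - loglik_foldin.max())
    r /= r.sum()                      # responsibilities (equal prior weights)
    m = loglik_rest.max()             # log-sum-exp of the weighted average
    return m + np.log(np.sum(r * np.exp(loglik_rest - m)))
```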
4 Experiments
The two distributed algorithms are initialized by first randomly assigning topics to z, then from this
counting topics in documents, N_kj|p, and words in topics, N_wk|p, for each processor. Recall for
AD-LDA that the count arrays N_wk|p = N_wk are the same on every processor (initially, and after
every global update). For each run of each algorithm, a sample was taken after 500 iterations of
the Gibbs sampler, well after the typical burn-in period of 200-300 iterations. Multiple processors
were simulated in software (by separating data, running sequentially through each processor, and
simulating the global update step), except for the speedup experiments, which were run on a
16-processor computer.
It is not obvious a priori that the AD-LDA algorithm will in general converge to a useful result.
Later in this section we describe a set of systematic empirical results with AD-LDA, but we first
use an illustrative toy example to provide some insight as to how AD-LDA learns a model. The toy
example has W words and K topics. The left panel of Figure 2 shows the L1 distance between
the model's estimate of a particular topic-word distribution and the true distribution, as a function
of Gibbs iterations, for both single-processor LDA and AD-LDA with P = 2. LDA and AD-LDA
have qualitatively the same three-phase learning dynamics¹. The first 4 or so iterations ('early
burn-in') correspond to somewhat random movement close to the randomly initialized starting point.

¹ For clarity, the results in this figure are plotted for a single run, single data set, etc.; we
observed qualitatively similar results over a large variety of such simulations.
[Figure 2 here: left panel plots L1 distance (0 to 0.4) against Gibbs iteration (0 to 30) for LDA
and the two AD-LDA processors, with the early burn-in, burn-in and equilibrium phases marked;
center and right panels project the topic estimates onto the simplex, showing paths from the start
(top right) to the topic mode for P = 2 and P = 10 processors.]

Figure 2: (Left) L1 distance to the mode for LDA and for P = 2 AD-LDA. (Center) Projection of
topics onto the simplex, showing convergence to the mode. (Right) Same setup as the center panel,
but with P = 10 processors.
In the next phase ('burn-in') both algorithms rapidly move in parameter space towards the posterior
mode. And finally, at equilibrium, both are sampling around the mode. The center panel of Figure 2
plots the same run in the 2-d planar simplex corresponding to the 3-word topic distribution. This
panel shows the paths in parameter space of each model, taking a few small steps near the starting
point (top right corner), moving down to the true solution (bottom left), and then sampling near the
posterior mode for the rest of the iterations. For each Gibbs iteration, the parameters corresponding
to each of the two individual processors, and those parameters after merging, are shown (for
AD-LDA). We observed that after the initial few iterations, the individual processor steps and the
merge step each resulted in a move closer to the mode. The right panel in Figure 2 illustrates the
same qualitative behavior as in the center panel, but now for 10 processors. One might worry that the
AD-LDA algorithm would get 'trapped' close to the initial starting point, e.g., due to repeated
label mismatching of the topics across processors. In practice we have consistently observed that
the algorithm quickly discards such configurations (due to the stochastic nature of the moves) and
'latches' onto a consistent labeling that then rapidly moves it towards the posterior mode.
It is useful to think of AD-LDA as an approximation to stochastic descent in the space of assignment
variables z. On a single processor, one can view Gibbs sampling during burn-in as a stochastic
algorithm to move up the likelihood surface. With multiple processors, each processor computes an
upward direction in its own subspace, keeping all other directions fixed. The global update step then
recombines these directions by vector addition, in the same way as one would compute a gradient
using finite differences. This is expected to be accurate as long as the surface is locally convex
or concave, but will break down at saddle points. We conjecture AD-LDA works reliably because
saddle points are (1) unstable and (2) rare, due to the fact that the posterior often appears to be
highly peaked for LDA models and high-dimensional count data sets.
To evaluate AD-LDA and HD-LDA systematically, we measured performance using test set
perplexity, computed as Perp(x^test) = exp(−log P(x^test) / N^test). For every test document, half
the words (at random) are put in a fold-in part, and the remaining words are put in a test part. The
document mix θ_j is learned using the fold-in part, and log probability is computed using this mix
and words from the test part, ensuring that the test words are never seen before being used. For
AD-LDA, the perplexity computation exactly follows that of LDA, since a single set of topic counts
N_wk is saved when a sample is taken. In contrast, all copies of N_wk|p are required to compute
perplexity for HD-LDA, as described in the previous section. Except where stated, perplexities are
computed for all algorithms using S samples from the posterior (from 10 different chains) using

  log P(x^test) = Σ_jw N_jw^test log (1/S) Σ_s Σ_k θ̂_kj^s φ̂_wk^s,
  θ̂_kj^s = (α + N_kj^s)/(Kα + N_j^s),   φ̂_wk^s = (β + N_wk^s)/(Wβ + N_k^s),   (8)

with the analogous expression being used for HD-LDA.
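For a single saved sample (S = 1) the computation of equation (8) reduces to a few lines; the array names below are ours.

```python
import numpy as np

def perplexity(test_counts, n_wk, n_kj, alpha, beta):
    """test_counts: W x D matrix of test-word counts for the fold-in docs."""
    W, K = n_wk.shape
    phi = (n_wk + beta) / (n_wk.sum(axis=0) + W * beta)      # W x K topics
    theta = (n_kj + alpha) / (n_kj.sum(axis=0) + K * alpha)  # K x D mixes
    log_p = (test_counts * np.log(phi @ theta)).sum()        # eq. (8), S = 1
    return np.exp(-log_p / test_counts.sum())
```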
We compared LDA (Gibbs sampling on a single processor) and our two distributed algorithms,
AD-LDA and HD-LDA, using three data sets: KOS (from dailykos.com), NIPS (from books.nips.cc)
and NYTIMES (from ldc.upenn.edu). Each data set was split into a training set and a test set. Size
parameters for these data sets are shown in Table 1. For each corpus, W is the vocabulary size and
N is the total number of words. Using the three data sets and the three models we computed test set
perplexities for a range of numbers of topics K, and for numbers of processors P ranging from 10
to 1000 for our distributed models.

             D_train    W          N             D_test
  KOS        3,000      6,906      410,000       430
  NIPS       1,500      12,419     1,900,000     184
  NYTIMES    300,000    102,660    100,000,000   34,658

Table 1: Size parameters for the three data sets used in perplexity and speedup experiments.
[Figure 3 here: two panels of test perplexity versus number of processors P ∈ {1, 10, 100}; left
panel KOS with T = 8, 16, 32, 64 topics, right panel NIPS with T = 10, 20, 40, 80 topics; each
panel compares LDA, AD-LDA and HD-LDA.]

Figure 3: Test perplexity of models versus number of processors P for KOS (left) and NIPS (right).
P = 1 corresponds to LDA (circles); AD-LDA (crosses) and HD-LDA (squares) are shown at
P = 10 and P = 100.
Figure 3 clearly shows that, for a fixed number of topics, the perplexity results are essentially the
same whether we use single-processor LDA or either of the two algorithms with data distributed
across multiple processors (either 10 or 100). The figure shows the test set perplexity for KOS
(left) and NIPS (right), versus the number of processors P. The P = 1 perplexity is computed by
LDA (circles), and we use our distributed models, AD-LDA (crosses) and HD-LDA (squares), to
compute the P = 10 and P = 100 perplexities. Though not shown, perplexities for AD-LDA
remained approximately constant as the number of processors was further increased to P = 1000
for KOS and P = 500 for NIPS, demonstrating effective distributed learning with only 3 documents
on each processor. It is worth emphasizing that, despite no formal convergence guarantees, the
approximate distributed algorithm converged to good solutions in every single one of the more than
one thousand experiments we did using five real-world data sets, plus synthesized data sets designed
to be 'hard' to learn (i.e., topics mutually exclusively distributed over processors); page limitations
preclude a full description of all these results in this paper.
[Figure 4 here: left panel, test perplexity versus Gibbs iteration (up to 400) for LDA, AD-LDA
(P = 10, 100) and HD-LDA (P = 10, 100); right panel, test perplexity versus number of topics (up
to 700) for LDA, AD-LDA (P = 10) and HD-LDA (P = 10).]

Figure 4: (Left) Test perplexity versus iteration. (Right) Test perplexity versus number of topics.
To properly determine the utility of the distributed algorithms, it is necessary to check whether the
parallelized samplers are systematically converging more slowly than single processor sampling. If
[Figure 5 here: left panel, precision (0 to 0.5) of TF-IDF, LDA, AD-LDA and HD-LDA on the AP
and FR collections; right panel, speedup of AD-LDA versus number of processors P (2 to 16),
compared against perfect speedup.]

Figure 5: (Left) Precision/recall results. (Right) Parallel speedup results.
this were the case, it would mitigate the computational gains of parallelization. In fact, our
experiments consistently showed (somewhat surprisingly) that the convergence rate for the distributed
algorithms is just as rapid as for the single-processor case. As an example, Figure 4 (left) shows
test perplexity versus iteration number of the Gibbs sampler on NIPS. During burn-in, up to
iteration 200, the distributed models are actually converging slightly faster than single-processor
LDA. Also note that one iteration of AD-LDA (or HD-LDA) on a parallel computer takes a fraction
of the wall-clock time of one iteration of LDA.
We also investigated whether the results were sensitive to the number of topics used in the models,
e.g., perhaps the distributed algorithms' performance diverges when the number of topics becomes
very large. Figure 4 (right) shows the test set perplexity computed on the NIPS data set, as a
function of the number of topics, for the different algorithms and a fixed number of processors
(not shown here are the results for the KOS data set, which were quite similar). The perplexities of
the different algorithms closely track each other as the number of topics varies. Sometimes the
distributed algorithms produce slightly lower perplexities than those of single-processor LDA. This
lower perplexity may be due to: for AD-LDA, parameters constantly splitting and merging, producing
an internal averaging effect; and for HD-LDA, test perplexity being computed using the P copies of
the saved parameters.
Finally, to demonstrate that the low perplexities obtained from the distributed algorithms with
P = 100 processors are not just due to averaging effects, we split the NIPS corpus into one hundred
15-document collections and ran LDA separately on each of these hundred collections. The test
perplexity computed by averaging the 100 separate LDA models was 2117, versus the P = 100 test
perplexity of 1575 for AD-LDA and HD-LDA. This shows that simple averaging of results from
separate processors does not perform nearly as well as the distributed coordinated learning.
Our distributed algorithms also perform well under other performance metrics. We performed
precision/recall calculations using TREC's AP and FR collections and measured performance using
the well-known mean average precision (MAP) metric used in IR research. Figure 5 (left) again
shows that AD-LDA and HD-LDA (both using P = 10) perform similarly to LDA. All three LDA
models have significantly higher precision than TF-IDF on the AP and FR collections (significance
was computed using a t-test at the 0.05 level). These calculations were run with a fixed number of
topics.
The per-processor, per-iteration time and space complexity of LDA and AD-LDA are shown in
Table 2. AD-LDA's memory requirement scales well as collections grow, because while N and D
can get arbitrarily large (which can be offset by increasing P), the vocabulary size W asymptotes.
Similarly, the time complexity scales well, since the leading-order term K N is divided by P. The
K W log P term accounts for the communication cost of the reduce-scatter operation on the count
difference (N_wk|p − N_wk), which is executed in log P stages. Because of the additional term,
parallel efficiency will depend on the ratio of computation to communication, N / (P W log P),
with increasing efficiency as this ratio increases. Space and time complexity of HD-LDA are similar
to those of AD-LDA, but HD-LDA has bigger constants.
Using our large NYTIMES data set, we performed speedup experiments on a 16-processor SMP
shared-memory computer using P = 1, 2, 4, 8 and 16 processors (since we did not have access
to a distributed-memory computer). The single-processor LDA run with 1000 iterations for this
data set involves an enormous number of floating-point operations and takes more than 10 days on
a 3 GHz workstation, so it is an ideal computation to speed up.
           LDA             AD-LDA
  Space:   N + K(D + W)    N/P + K(D/P + W)
  Time:    N K             N K / P + K W log P

Table 2: Space and time complexity of LDA and AD-LDA.
The speedup results, shown in Figure 5 (right), show reasonable parallel efficiency, with a speedup
of roughly 8 using P = 16 processors. This speedup reduces our 10-day NYTIMES run (880
sec/iteration on 1 processor) to the order of one day (105 sec/iteration on 16 processors). Note,
however, that while the implementation on an SMP machine captures some distributed effects
(e.g., time to synchronize), it does not accurately reflect the extra time for communication. However,
we do expect that for problems with large N, parallel efficiency will be high.
5 Discussion and Conclusions
Prior work on parallelizing probabilistic learning algorithms has focused largely on EM-optimization
algorithms, e.g., parallel updates of expected sufficient statistics for mixture models [2, 1]. In the
statistical literature, the idea of running multiple MCMC chains in parallel is one approach to
parallelization (e.g., the method of parallel tempering), but it requires that each processor store a
copy of the full data set. Since MCMC is inherently sequential, parallel sampling using distributed
subsets of the data will not in general yield a proper MCMC sampler except in special cases [10].
Mimno and McCallum [11] recently proposed the DCM-LDA model, where processor-specific sets of
topics are learned independently on each processor for local subsets of data, without any
communication between processors, followed by a global clustering of the topics from the different
processors. While this method is highly scalable, it does not lead to a single global set of topics
that represent individual documents, nor is it defined by a generative process.
We proposed two different approaches to distributing MCMC sampling across different processors
for an LDA model. With AD-LDA we sample from an approximation to the posterior density by
allowing different processors to concurrently sample latent topic assignments on their local subsets
of the data. Despite having no formal convergence guarantees, AD-LDA works very well empirically and is easy to implement. With HD-LDA we adapt the underlying LDA model to map to
the distributed computational infrastructure. While this model is more complicated than AD-LDA,
and slower to run (because of digamma evaluations), it inherits the usual convergence properties of
MCEM. Careful selection of hyper-parameters was critical to making HD-LDA work well.
In conclusion, both of our proposed algorithms learn models with predictive performance that is no
different than single-processor LDA. On each processor they burn-in and converge at the same rate
as LDA, yielding significant speedups in practice. The space and time complexity of both models
make them scalable to run on enormous problems, for example, collections with billions to trillions
of words. There are several potentially interesting research directions that can be pursued using
the algorithms proposed here as a starting point, e.g., using asynchronous local communication (as
opposed to the environment of synchronous global communications covered in this paper) and more
complex schemes that allow data to adaptively move from one processor to another. The distributed
scheme of AD-LDA can also be used to parallelize other machine learning algorithms. Using the
same principles, we have implemented distributed versions of NMF and PLSA, and initial results
suggest that these distributed algorithms also work well in practice.
6 Acknowledgements
This material is based upon work supported by the National Science Foundation: DN and PS were
supported by NSF grants SCI-0225642, CNS-0551510, and IIS-0083489, AA was supported by an
NSF graduate fellowship, and MW was supported by grants IIS-0535278 and IIS-0447903.
References
[1] C. Chu, S. Kim, Y. Lin, Y. Yu, G. Bradski, A. Ng, and K. Olukotun. Map-Reduce for machine
learning on multicore. In NIPS 19, pages 281–288. MIT Press, Cambridge, MA, 2007.
[2] W. Kowalczyk and N. Vlassis. Newscast EM. In NIPS 17, pages 713–720. MIT Press, Cambridge, MA, 2005.
[3] A. Das, M. Datar, A. Garg, and S. Rajaram. Google news personalization: Scalable online
collaborative filtering. In 16th International World Wide Web Conference, 2007.
[4] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
[5] T. Griffiths and M. Steyvers. Finding scientific topics. In Proceedings of the National Academy
of Sciences, volume 101, pages 5228–5235, 2004.
[6] Y. W. Teh, M. Jordan, M. Beal, and D. Blei. Sharing clusters among related groups: Hierarchical Dirichlet processes. In NIPS 17, pages 1385–1392. MIT Press, Cambridge, MA, 2005.
[7] W. Li and A. McCallum. Pachinko allocation: DAG-structured mixture models of topic correlations. In ICML, pages 577–584, 2006.
[8] G. Wei and M. Tanner. A Monte Carlo implementation of the EM algorithm and the poor man's
data augmentation algorithms. Journal of the American Statistical Association, 85(411):699–704, 1990.
[9] T. Minka. Estimating a Dirichlet distribution. http://research.microsoft.com/~minka/papers/dirichlet/, 2003.
[10] A. Brockwell. Parallel Markov chain Monte Carlo simulation by pre-fetching. Journal of
Computational and Graphical Statistics, 15:246–261, 2006.
[11] D. Mimno and A. McCallum. Organizing the OCA: Learning faceted subjects from a library of
digital books. In Joint Conference on Digital Libraries, pages 376–385, 2007.
TrueSkill Through Time:
Revisiting the History of Chess
Pierre Dangauthier
INRIA Rhone Alpes
Grenoble, France
pierre.dangauthier@imag.fr
Ralf Herbrich
Microsoft Research Ltd.
Cambridge, UK
rherb@microsoft.com
Tom Minka
Microsoft Research Ltd.
Cambridge, UK
minka@microsoft.com
Thore Graepel
Microsoft Research Ltd.
Cambridge, UK
thoreg@microsoft.com
Abstract
We extend the Bayesian skill rating system TrueSkill to infer entire time
series of skills of players by smoothing through time instead of filtering.
The skill of each participating player, say, every year is represented by a
latent skill variable which is affected by the relevant game outcomes that
year, and coupled with the skill variables of the previous and subsequent
year. Inference in the resulting factor graph is carried out by approximate
message passing (EP) along the time series of skills. As before the system
tracks the uncertainty about player skills, explicitly models draws, can deal
with any number of competing entities and can infer individual skills from
team results. We extend the system to estimate player-specific draw margins. Based on these models we present an analysis of the skill curves of
important players in the history of chess over the past 150 years. Results
include plots of players' lifetime skill development as well as the ability to
compare the skills of different players across time. Our results indicate that
a) the overall playing strength has increased over the past 150 years, and
b) that modelling a player's ability to force a draw provides significantly
better predictive power.
1 Introduction
Competitive games and sports can benefit from statistical skill ratings for use in matchmaking as well as for providing criteria for the admission to tournaments. From a historical
perspective, skill ratings also provide information about the general development of skill
within the discipline or for a particular group of interest. Also, they can give a fascinating
narrative about the key players in a given discipline, allowing a glimpse at their rise and fall
or their struggle against their contemporaries.
In order to provide good estimates of the current skill level of players, skill rating systems
have traditionally been designed as filters that combine a new game outcome with knowledge
about a player's skill from the past to obtain a new estimate. In contrast, when taking a
historical view we would like to infer the skill of a player at a given point in the past when
both their past as well as their future achievements are known.
The best-known such filter-based skill rating system is the Elo system [3], developed by Arpad
Elo in 1959 and adopted by the World Chess Federation FIDE in 1970 [4]. Elo models the
probability of the game outcome as P(1 wins over 2 | s1, s2) := Φ((s1 − s2)/(√2 β)), where s1 and s2
are the skill ratings of each player, Φ denotes the cumulative density of a zero-mean unit-variance
Gaussian and β is the assumed variability of performance around skill. Denote the game outcomes
by y = +1 if player 1 wins, y = −1 if player 2 wins and y = 0 if a draw occurs. Then the resulting
(linearised) Elo update is given by s1 ← s1 + yΔ, s2 ← s2 − yΔ and

  Δ = α β √π · ( (y + 1)/2 − P(1 wins over 2 | s1, s2) ),

where α β √π plays the role of the K-factor and 0 < α < 1 determines how much the filter weighs
the new evidence versus the old estimate.
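For reference, a self-contained sketch of an Elo-style update in the usual score-minus-expectation form, with K-factor αβ√π; the default parameter values are illustrative placeholders, not values used by FIDE or in this paper.

```python
import math

def elo_update(s1, s2, y, beta=200.0, alpha=0.07):
    """y = +1 if player 1 wins, -1 if player 2 wins, 0 for a draw."""
    # P(1 wins over 2) = Phi((s1 - s2) / (sqrt(2) * beta)), Phi via erf.
    p1 = 0.5 * (1.0 + math.erf((s1 - s2) / (2.0 * beta)))
    k_factor = alpha * beta * math.sqrt(math.pi)
    delta = k_factor * ((y + 1) / 2.0 - p1)  # actual score minus expectation
    return s1 + delta, s2 - delta
```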
The TrueSkill rating system [6] improves on the Elo system in a number of ways. TrueSkill's
current belief about a player's skill is represented by a Gaussian distribution with mean μ
and variance σ². As a consequence, TrueSkill does not require a provisional rating period
and converges to the true skills of players very quickly. Also, in contrast to Elo, TrueSkill
explicitly models the probability of draws. Crucially for its application in the Xbox Live
online gaming system (see [6] for details) it can also infer skills from games with more than
two participating entities and infers individual players' skills from the outcomes of team
games.
As a skill rating and matchmaking system TrueSkill operates as a filter as discussed above.
However, due to its fully probabilistic formulation it is possible to extend TrueSkill to perform
smoothing on a time series of player skills. In this paper we extend TrueSkill to provide
accurate estimates of the past skill levels of players at any point in time taking into account
both their past and their future achievements. We carry out a large-scale analysis of about
3.5 million games of chess played over the last 150 years.
The paper is structured as follows. In Section 2 we review previous work on historical
chess ratings. In Section 3 we present two models for historical ratings through time, one
assuming a fixed draw margin and one estimating the draw margin per player per year.
We indicate how large scale approximate message passing (EP) can be used to efficiently
perform inference in these huge models. In Section 4 we present experimental results on a
huge data set from ChessBase with over 3.5 million games and gain some fascinating chess
specific insights from the data.
2 Previous Work on Historical Chess Ratings
Estimating players' skills in retrospect allows one to take into account more information
and hence can be expected to lead to more precise estimates. The pioneer in this field
was Arpad Elo himself, when he encountered the necessity of initializing the skill values of
the Elo system when it was first deployed. To that end he fitted a smooth curve to skill
estimates from five-year periods; however little is known about the details of his method [3].
Probably best known in the chess community is the Chessmetrics system [8], which aims
at improving the Elo scores by attempting to obtain a better fit with the observed data.
Although constructed in a very thoughtful manner, Chessmetrics is not a statistically well-founded method and is a filtering algorithm that disregards information from future games.
The first approach to the historical rating problem with a solid statistical foundation was
developed by Mark Glickman, chairman of the USCF Rating Committee. Glicko 1 & 2 [5]
are Bayesian rating systems that address a number of drawbacks of the Elo system while
still being based on the Bradley-Terry paired-comparison method [1] used by modern Elo.
Glickman models skills as Gaussian variables whose variances indicate the reliability of the
skill estimate, an idea later adopted in the TrueSkill model as well. Glicko 2 adds volatility
measures, indicating the degree of expected fluctuation in a player's rating. After an initial
estimate, past estimations are smoothed by propagating information back in time.
The second statistically well-founded approach is Rod Edwards's Edo Historical Chess
Ratings [2], which are also based on the Bradley-Terry model but have been applied only to
historical games from the 19th century. In order to model skill dynamics Edwards considers
the same player at different times as several distinct players, whose skills are linked together
by a set of virtual games which are assumed to end in draws. While Edo incorporates a
dynamics model via virtual games and returns uncertainty measures in terms of the estimator's
variance, it is not a full Bayesian model and provides neither posterior distributions
over skills, nor does it explicitly model draws.
In light of the above previous work on historical chess ratings the goal of this paper is to
introduce a fully probabilistic model of chess ratings through time which explicitly accounts
for draws and provides posterior distributions of skills that reflect the reliability of the
estimate at every point in time.
3 Models for Ranking through Time
This paper strongly builds on the original TrueSkill paper [6]. Although TrueSkill is applicable to the case of multiple team games, we will only consider the two player case for this
application to chess. It should be clear, however, that the methods presented can equally
well be used for games with any number of teams competing.
Consider a game such as chess in which a number of, say, N players {1, . . . , N} are competing
over a period of T time steps, say, years. Denote the series of game outcomes between two
players i and j in year t by y_ij^t(k) ∈ {+1, −1, 0}, where k ∈ {1, . . . , K_ij^t} indexes the
game outcomes available for that pair of players in that year. Furthermore, let y = +1 if
player i wins, y = −1 if player j wins and y = 0 in case of a draw.
3.1 Vanilla TrueSkill

In the Vanilla TrueSkill system, each player i is assumed to have an unknown skill s_i^t ∈ R at
time t. We assume that a game outcome y_ij^t(k) is generated as follows. For each of the two
players i and j, performances p_ij^t(k) and p_ji^t(k) are drawn according to
p(p_ij^t(k) | s_i^t) = N(p_ij^t(k); s_i^t, β²). The outcome y_ij^t(k) of the game between
players i and j is then determined as

  y_ij^t(k) :=  +1  if p_ij^t(k) > p_ji^t(k) + ε,
                −1  if p_ji^t(k) > p_ij^t(k) + ε,
                 0  if |p_ij^t(k) − p_ji^t(k)| ≤ ε,
where the parameter ε > 0 is the draw margin. In order to infer the unknown skills s_i^t, the
TrueSkill model assumes a factorising Gaussian prior p(s_i^0) = N(s_i^0; μ_0, σ_0²) over skills and
a Gaussian drift of skills between time steps given by p(s_i^t | s_i^{t−1}) = N(s_i^t; s_i^{t−1}, τ²).
The model can be well described as a factor graph (see Figure 1, left), which clarifies the
factorisation assumptions of the model and allows efficient (approximate) inference algorithms based
on message passing to be developed (for details see [6]).
In the Vanilla TrueSkill algorithm, denoting the winning player by W and the losing player by L
and dropping the time index for now, approximate Bayesian inference (Gaussian density
filtering [7]) leads to the following update equations for μ_W, μ_L, σ_W and σ_L:

  μ_W ← μ_W + (σ_W²/c_ij) · v((μ_W − μ_L)/c_ij, ε/c_ij),
  σ_W² ← σ_W² · [1 − (σ_W²/c_ij²) · w((μ_W − μ_L)/c_ij, ε/c_ij)],
  μ_L ← μ_L − (σ_L²/c_ij) · v((μ_W − μ_L)/c_ij, ε/c_ij),
  σ_L² ← σ_L² · [1 − (σ_L²/c_ij²) · w((μ_W − μ_L)/c_ij, ε/c_ij)].

The overall variance is c_ij² = 2β² + σ_W² + σ_L², and the two functions v and w are given by

  v(t, ε) := N(t − ε; 0, 1) / Φ(t − ε)   and   w(t, ε) := v(t, ε) · (v(t, ε) + (t − ε)).
For the case of a draw we have the following update equations:

  μ_i ← μ_i + (σ_i²/c_ij) · ṽ((μ_i − μ_j)/c_ij, ε/c_ij),
  σ_i² ← σ_i² · [1 − (σ_i²/c_ij²) · w̃((μ_i − μ_j)/c_ij, ε/c_ij)],

and similarly for player j. Defining d := ε − t and s := ε + t, the functions ṽ and w̃ are given by

  ṽ(t, ε) := (N(−s; 0, 1) − N(d; 0, 1)) / (Φ(d) − Φ(−s))   and
  w̃(t, ε) := ṽ²(t, ε) + (d · N(d; 0, 1) + s · N(s; 0, 1)) / (Φ(d) − Φ(−s)).
In order to approximate the skill parameters μ_i^t and σ_i^t for all players i ∈ {1, . . . , N} at
all times t ∈ {0, . . . , T}, the Vanilla TrueSkill algorithm initialises each skill belief with
μ_i^0 ← μ_0 and σ_i^0 ← σ_0. It then proceeds through the years t ∈ {1, . . . , T} in order, goes
through the game outcomes y_ij^t(k) in random order and updates the skill beliefs according to the
equations above.
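These updates translate almost line for line into code. A minimal sketch using scipy for the Gaussian pdf and cdf, covering both the win/loss case and the draw case:

```python
import math
from scipy.stats import norm

def v_w_win(t, eps):
    # v and w for a win/loss outcome.
    v = norm.pdf(t - eps) / norm.cdf(t - eps)
    return v, v * (v + (t - eps))

def v_w_draw(t, eps):
    # v~ and w~ for a draw, with d = eps - t and s = eps + t.
    d, s = eps - t, eps + t
    z = norm.cdf(d) - norm.cdf(-s)
    v = (norm.pdf(-s) - norm.pdf(d)) / z
    return v, v * v + (d * norm.pdf(d) + s * norm.pdf(s)) / z

def trueskill_update(mu1, var1, mu2, var2, beta, eps, draw=False):
    """One density-filtering step; player 1 is the winner unless draw=True."""
    c = math.sqrt(2.0 * beta ** 2 + var1 + var2)
    v, w = (v_w_draw if draw else v_w_win)((mu1 - mu2) / c, eps / c)
    mu1, mu2 = mu1 + (var1 / c) * v, mu2 - (var2 / c) * v
    var1, var2 = var1 * (1 - (var1 / c ** 2) * w), var2 * (1 - (var2 / c ** 2) * w)
    return mu1, var1, mu2, var2
```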
3.2 TrueSkill through Time (TTT)
The Vanilla TrueSkill algorithm suffers from two major disadvantages:
1. Inference within a given year t depends on the random order chosen for the updates.
Since no knowledge is assumed about game outcomes within a given year, the results
of inference should be independent of the order of games within a year.
2. Information across years is only propagated forward in time. More concretely, if
player A beats player B and player B later turns out to be very strong (i.e., as
evidenced by him beating very strong player C repeatedly), then Vanilla TrueSkill
cannot propagate that information backwards in time to correct player A's skill
estimate upwards.
Both problems can be addressed by extending the Gaussian density filtering to running full
expectation propagation (EP) until convergence [7]. The basic idea is to update repeatedly
on the same game outcomes but making sure that the effect of the previous update on that
game outcome is removed before the new effect is added. This way, the model remains the
same but the inferences are less approximate.
More specifically, we go through the game outcomes y_ij^t within a year t several times until
convergence. The update for a game outcome y_ij^t(k) is performed in the same way as before,
but saving the upward messages m_{f(p_ij^t(k), s_i^t) → s_i^t}(s_i^t), which describe the effect of
the updated performance p_ij^t(k) on the underlying skill s_i^t. When game outcome y_ij^t(k) comes
up for update again, the new downward message m_{f(p_ij^t(k), s_i^t) → p_ij^t(k)}(p_ij^t(k)) can
be calculated by

  m_{f(p_ij^t(k), s_i^t) → p_ij^t(k)}(p_ij^t(k))
    = ∫ f(p_ij^t(k), s_i^t) · p(s_i^t) / m_{f(p_ij^t(k), s_i^t) → s_i^t}(s_i^t) ds_i^t,

thus effectively dividing out the earlier upward message to avoid double counting. The integral
above is easily evaluated since the messages as well as the marginals p(s_i^t) have been assumed
Gaussian. The new downward message serves as the effective prior belief on the performance
p_ij^t(k). At convergence, the dependency of the inferred skills on the order
of game outcomes vanishes.
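Since all messages and marginals are Gaussian, 'dividing out' a message amounts to a subtraction in natural (precision) parameters; a minimal sketch:

```python
def divide_gaussian(mu, var, msg_mu, msg_var):
    """Gaussian proportional to N(mu, var) / N(msg_mu, msg_var)."""
    prec = 1.0 / var - 1.0 / msg_var           # precision of the quotient
    mean_times_prec = mu / var - msg_mu / msg_var
    return mean_times_prec / prec, 1.0 / prec  # (mean, variance)
```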
The second problem is addressed by performing inference for TrueSkill through time (TTT),
i.e. by repeatedly smoothing forward and backward in time. The first forward pass of TTT
is identical to the inference pass of Vanilla TrueSkill except that the forward messages
m_{f(s_i^{t−1}, s_i^t) → s_i^t}(s_i^t) are stored. They represent the influence of skill estimate
s_i^{t−1} at time t − 1 on skill estimate s_i^t at time t. In the backward pass, these messages are
then used to calculate the new backward messages m_{f(s_i^{t−1}, s_i^t) → s_i^{t−1}}(s_i^{t−1}),
which effectively serve as the new prior for time step t − 1,

  m_{f(s_i^{t−1}, s_i^t) → s_i^{t−1}}(s_i^{t−1})
    = ∫ f(s_i^{t−1}, s_i^t) · p(s_i^t) / m_{f(s_i^{t−1}, s_i^t) → s_i^t}(s_i^t) ds_i^t.

This procedure is repeated forward and backward along the time series of skills until convergence.
The backward passes make it possible to propagate information from the future into the past.
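Here is a sketch of this forward-backward sweep for one player's chain of yearly skills, with Gaussians held as (mean, variance) pairs; for brevity we abstract each year's aggregated game evidence into a single Gaussian likelihood, a simplification of the within-year EP updates.

```python
def smooth_chain(mu0, var0, tau2, likelihoods):
    """likelihoods: per-year (mean, variance) evidence; returns marginals."""
    T = len(likelihoods)
    fwd = [(mu0, var0)]                    # message from the past into year t
    for t in range(T):                     # forward pass
        m, v = gaussian_multiply(fwd[t], likelihoods[t])
        fwd.append((m, v + tau2))          # diffuse by the drift variance
    bwd = [(0.0, float('inf'))] * (T + 1)  # uniform messages from the future
    for t in reversed(range(T)):           # backward pass
        m, v = gaussian_multiply(bwd[t + 1], likelihoods[t])
        bwd[t] = (m, v + tau2)
    return [gaussian_multiply(gaussian_multiply(fwd[t], bwd[t + 1]),
                              likelihoods[t]) for t in range(T)]

def gaussian_multiply(a, b):
    (m1, v1), (m2, v2) = a, b
    if v2 == float('inf'):                 # multiplying by a uniform factor
        return (m1, v1)
    v = 1.0 / (1.0 / v1 + 1.0 / v2)
    return (v * (m1 / v1 + m2 / v2), v)
```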
[Figure 1 here: factor graphs for a single game outcome. Variables include the skills s_W^{t−1},
s_L^{t−1}, s_W^t, s_L^t, the performances p_W, p_L, the performance difference d and, in the
TTT-D graphs, the draw margins ε_i^{t−1}, ε_i^t, ε_j^{t−1}, ε_j^t and winning thresholds u_i, u_j.]

Figure 1: Factor graphs of single game outcomes for TTT (left) and TTT-D. In the left graph
there are three types of variables: skills s, performances p, and performance differences d. In the
TTT-D graphs there are two additional types: draw margins ε and winning thresholds u. The graphs
only require three different types of factors: Gaussian factors of the form N(x; μ, σ²), positivity
factors of the form I(x > 0), and difference factors of the form I(x − y = z).
3.3 TTT with Individual Draw Margins (TTT-D)

From exploring the data it is known that the probability of a draw not only increases markedly
through the history of chess, but is also positively correlated with playing skill and even varies
considerably across individual players. We would thus like to extend the TrueSkill model to
incorporate another player-specific parameter which indicates a player's ability to force a draw.
Suppose each player i at every time step t is characterised by an unknown skill s_i^t ∈ R and a
player-specific draw margin ε_i^t > 0. Again, performances p_ij^t(k) and p_ji^t(k) are drawn
according to p(p_ij^t(k) | s_i^t) = N(p_ij^t(k); s_i^t, β²). In this model a game outcome y_ij^t(k)
between players i and j at time t is generated as follows:

  y_ij^t(k) =  +1  if p_ij^t(k) > p_ji^t(k) + ε_j^t,
               −1  if p_ji^t(k) > p_ij^t(k) + ε_i^t,
                0  if −ε_i^t ≤ p_ij^t(k) − p_ji^t(k) ≤ ε_j^t.
In addition to the Gaussian assumption about player skills as in the Vanilla TrueSkill model of
Section 3.1, we assume a factorising Gaussian prior over the player-specific draw margins,
p(ε_i^0) = N(ε_i^0; ν_0, ς_0²), and a Gaussian drift of draw margins between time steps given by
p(ε_i^t | ε_i^{t−1}) = N(ε_i^t; ε_i^{t−1}, ω²). The factor graph for the case of win/loss is shown
in Figure 1 (centre) and for the case of a draw in Figure 1 (right). Note that the positivity of the
player-specific draw margins at each time step t is enforced by a factor I(ε > 0).

Inference in the TTT-D model is again performed by expectation propagation, both within a given
year t as well as across years in a forward-backward manner. Note that in this model the current
belief about a player is represented by four numbers: μ_i^t and σ_i^t for the skill, and ν_i^t and
ς_i^t for the player-specific draw margin. Players with a high value of ν_i^t can be thought of as
having the ability to achieve a draw against strong players, while players with a high value of
μ_i^t have the ability to achieve a win.
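The generative side of TTT-D is straightforward to simulate, which is useful for sanity checks; the function below samples one game outcome under the model (the parameter values in the usage line are arbitrary).

```python
import numpy as np

def sample_outcome(s_i, s_j, eps_i, eps_j, beta, rng):
    p_i = rng.normal(s_i, beta)   # performance of player i
    p_j = rng.normal(s_j, beta)   # performance of player j
    if p_i > p_j + eps_j:         # i must clear j's draw margin to win
        return +1
    if p_j > p_i + eps_i:         # j must clear i's draw margin to win
        return -1
    return 0                      # otherwise the game is a draw

rng = np.random.default_rng(0)
print(sample_outcome(1300.0, 1250.0, 80.0, 120.0, beta=480.0, rng=rng))
```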
[Figure 2 here: left panel, histogram of the number of recorded games per year from 1850 to 2004,
rising to roughly 2.5 × 10^5 games per year; right panel, contour plot of the log-evidence over
(β, τ).]

Figure 2: (Left) Distribution over the number of recorded match outcomes played per year in the
ChessBase database. (Right) The log-evidence P(y | β, τ) for the TTT model as a function of the
variation of player performance, β, and the skill dynamics, τ. The maximizing parameter settings
are indicated by a black dot.
4 Experiments and Results
Our experiments are based on a data set of chess match outcomes collected by ChessBase¹.
This database is the largest top-class annotated database in the world and covers more than
3.5 million chess games from 1560 to 2006, played between approximately 200,000 unique players.
From this database, we selected all the matches between 1850 (the birth of modern chess) and
2006. This results in 3,505,366 games between 206,059 unique players. Note that a large
proportion of games was collected between 1987 and 2006 (see Figure 2 (left)).

Our implementation of the TrueSkill through Time algorithms was done in F#² and builds
a factor graph with approximately 11,700,000 variables and 15,200,000 factors (TTT) or
18,500,000 variables and 27,600,000 factors (TTT-D). The whole schedule allocates no more
than 6 GB (TTT) or 11 GB (TTT-D) and converges in less than 10 minutes (TTT) / 20
minutes (TTT-D) of CPU time on a standard Pentium 4 machine. The code for this
analysis will be made publicly available.
In the first experiment, we built the TTT model for the above-mentioned collection of chess
games. The draw margin was chosen such that the a priori probability of a draw between two
equally skilled players matches the overall draw probability of 30.3%. Moreover, the model
has a translational invariance in the skills and a scale invariance in β/σ_0 and τ/σ_0. Thus,
we fixed μ_0 = 1200, σ_0 = 400 and computed the log-evidence L := P(y | β, τ) for varying
values of β and τ (see Figure 2 (right)). The plots show that the model is very robust to
the setting of these two parameters except if β is chosen too small. Interestingly, the
log-evidence is largest neither for τ → ∞ (complete de-coupling) nor for τ → 0 (constant skill
over a lifetime), indicating that it is important to model the dynamics of chess players. Note
that the log-evidence is L_TTT = −3,953,997, larger than that of the naive model
(L_naive = −4,228,005), which always predicts 30.3% for a draw and correspondingly for
win/loss³. In a second experiment, we picked the optimal values (β*, τ*) = (480, 60) for TTT
and optimised the remaining prior and dynamics parameters of TTT-D to arrive at a model with
a log-evidence of L_TTT-D = −3,661,813.
In Figure 3 we have plotted the skill evolution for some well-known players of the last 150
years when fitting the TTT model (μ^t and σ^t are shown). In Figure 4 the skill evolution of
the same players is plotted when fitting the TTT-D model; the dashed lines show μ^t + ν^t,
whereas the solid lines display μ^t, and for comparison we added the μ^t of the TTT model as
dotted lines.

¹ For more information, see http://www.bcmchess.co.uk/softdatafrcb.html.
² For more details, see http://research.microsoft.com/fsharp/fsharp.aspx.
³ Leakage due to approximate inference.
Figure 3: Skill evolution of top Chess players with TTT; see text for details.
As a first observation, the uncertainties always grow towards the beginning and the end of a
career, since they are not constrained by past/future years. In fact, for Bobby Fischer the
uncertainty grows very large in his 20 years of inactivity (1972–1992). Moreover, there seems
to be a noticeable increase in overall skill since the 1980s. Looking at Figure 4 we see that
players have different abilities to force a draw; the strongest player to do so is Boris Spassky
(1937–). This ability got stronger after 1975, which explains why the model with a fixed
draw margin estimates Spassky's skill to be larger.

Looking at individual players we see that Paul Morphy (1837–1884), 'The Pride and Sorrow
of Chess', is particularly strong when comparing his skill to those of his contemporaries in
the next 80 years. He is considered to have been the greatest chess master of his time, and
this is well supported by our analysis. 'Bobby' Fischer (1943–) tied with Boris Spassky at
the age of 17 and later defeated Spassky in the 'Match of the Century' in 1972. Again,
this is well supported by our model. Note how the uncertainty grows during the 20 years of
inactivity (1972–1992) but starts to shrink again in light of the (future) re-match of Spassky
and Fischer in 1992 (which Fischer won). Also, Fischer is the only one of these players
whose ν^t decreased over time; when he was active, he was known for the large margin by
which he won!

Finally, Garry Kasparov (1963–) is considered the strongest chess player of all time. This is
well supported by our analysis. In fact, based on our analysis Kasparov is still considerably
stronger than Vladimir Kramnik (1975–), but a contender for the crown of strongest player
in the world is Viswanathan Anand (1969–), a former FIDE world champion.
5 Conclusion
We have extended the Bayesian rating system TrueSkill to provide player ratings through
time on a unified scale. In addition, we introduced a new model that tracks player-specific
draw margins and thus models the game outcomes even more precisely. The resulting factor
graph model for our large ChessBase database of game outcomes has 18.5 million nodes and
27.6 million factors, thus constituting one of the largest non-trivial Bayesian models ever
[Figure 4 here: skill trajectories from 1850 to 2006, on a scale of roughly 1500 to 3500, for
Anderssen, Morphy, Steinitz, Lasker, Capablanca, Botvinnik, Eichborn, Spassky, Fischer, Karpov,
Kasparov, Kramnik and Anand; solid lines show the TTT-D skill estimate (variable draw margin),
dashed lines skill plus draw margin, and dotted lines the fixed-draw-margin (TTT) skill estimate.]
Figure 4: Skill evolution of top Chess players with TTT-D; see text for details.
tackled. Full approximate inference takes a mere 20 minutes in our F# implementation and
thus demonstrates the efficiency of EP in appropriately structured factor graphs.
One of the key questions provoked by this work concerns the comparability of skill estimates
across different eras of chess history. Can we directly compare Fischer's rating in 1972 with
Kasparov's in 1991? Edwards [2] points out that we would not be able to detect any skill
improvement if two players of equal skill were to learn about a skill-improving breakthrough
in chess theory at the same time but would only play against each other. However, this
argument does not rule out the possibility that with more players and chess knowledge
flowing less perfectly the improvement may be detectable. After all, we do see a marked
improvement in the average skill of the top players.
In future work, we would like to address the issue of skill calibration across years further,
e.g., by introducing a latent variable for each year that serves as the prior for new players
joining the pool. Also, it would be interesting to model the effect of playing white rather
than black.
References
[1] H. A. David. The method of paired comparisons. Oxford University Press, New York, 1988.
[2] R. Edwards. Edo historical chess ratings. http://members.shaw.ca/edo1/.
[3] A. E. Elo. The rating of chess players: Past and present. Arco Publishing, New York, 1978.
[4] M. E. Glickman. A comprehensive guide to chess ratings. Amer. Chess Journal, 3:59–102, 1995.
[5] M. E. Glickman. Parameter estimation in large dynamic paired comparison experiments. Applied
Statistics, 48:377–394, 1999.
[6] R. Herbrich, T. Minka, and T. Graepel. TrueSkill(TM): A Bayesian skill rating system. In
Advances in Neural Information Processing Systems 20, 2007.
[7] T. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT, 2001.
[8] J. Sonas. Chessmetrics. http://db.chessmetrics.com/.
Learning and using relational theories
Charles Kemp, Noah D. Goodman & Joshua B. Tenenbaum
Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139
{ckemp,ndg,jbt}@mit.edu
Abstract
Much of human knowledge is organized into sophisticated systems that are often
called intuitive theories. We propose that intuitive theories are mentally represented in a logical language, and that the subjective complexity of a theory is
determined by the length of its representation in this language. This complexity
measure helps to explain how theories are learned from relational data, and how
they support inductive inferences about unobserved relations. We describe two
experiments that test our approach, and show that it provides a better account of
human learning and reasoning than an approach developed by Goodman [1].
What is a theory, and what makes one theory better than another? Questions like these are of obvious
interest to philosophers of science but are also discussed by psychologists, who have argued that
everyday knowledge is organized into rich and complex systems that are similar in many respects
to scientific theories. Even young children, for instance, have systematic beliefs about domains
including folk physics, folk biology, and folk psychology [2]. Intuitive theories like these play many
of the same roles as scientific theories: in particular, both kinds of theories are used to explain and
encode observations of the world, and to predict future observations.
This paper explores the nature, use and acquisition of simple theories. Consider, for instance, an
anthropologist who has just begun to study the social structure of a remote tribe, and observes that
certain words are used to indicate relationships between selected pairs of individuals. Suppose that
term T1(·, ·) can be glossed as ancestor(·, ·), and that T2(·, ·) can be glossed as friend(·, ·). The
anthropologist might discover that the first term is transitive, and that the second term is symmetric
with a few exceptions. Suppose that term T3(·, ·) can be glossed as defers to(·, ·), and that the tribe
divides into two castes such that members of the second caste defer to members of the first caste. In
this case the anthropologist might discover two latent concepts (caste 1(·) and caste 2(·)) along
with the relationship between these concepts.
As these examples suggest, a theory can be defined as a system of laws and concepts that specify
the relationships between the elements in some domain [2]. We will consider how these theories are
learned, how they are used to encode relational data, and how they support predictions about unobserved relations. Our approach to all three problems relies on the notion of subjective complexity.
We propose that theory learners prefer simple theories, that people remember relational data in terms
of the simplest underlying theory, and that people extend a partially observed data set according to
the simplest theory that is consistent with their observations. There is no guarantee that a single
measure of subjective complexity can do all of the work that we require [3]. This paper, however,
explores the strong hypothesis that a single measure will suffice.
Our formal treatment of subjective complexity begins with the question of how theories are mentally
represented. We suggest that theories are represented in some logical language, and propose a speci?c ?rst-order language that serves as a hypothesis about the ?language of thought.? We then pursue
the idea that the subjective complexity of a theory corresponds to the length of its representation in
this language. Our approach therefore builds on the work of Feldman [4], and is related to other
psychological applications of the notion of Kolmogorov complexity [5]. The complexity measure
we describe can be used to de?ne a probability distribution over a space of theories, and we develop
a model of theory acquisition by using this distribution as the prior for a Bayesian learner. We also
1
[Figure 1 residue: each panel lists its pairs and the simplest theory below them, e.g. (a) Star: R(X, 1); (b) Bipartite: T(6). T(7). T(8). R(X, Y) ← T̄(X), T(Y); (c) Exception: the bipartite theory plus R(1, 1) and R̄(1, 6); (d) Symmetric: R(1, 2). R(1, 3). R(2, 4). R(5, 6). R(X, Y) ← R(Y, X). R(X, X); (e) Transitive: R(1, 2). R(2, 3). R(3, 4). R(4, 5). R(5, 6). R(X, Z) ← R(X, Y), R(Y, Z); (f) Random: an explicit list of pairs and exceptions.]
Figure 1: Six possible extensions for a binary predicate R(·, ·). In each case, the objects in the domain are represented as digits, and a pair such as 16 indicates that R(1, 6) is true. Below each set of pairs, the simplest theory according to our complexity measure is shown.
show how the same Bayesian approach helps to explain how theories support inductive generalization: given a set of observations, future observations (e.g. whether one individual defers to another)
can be predicted using the posterior distribution over the space of theories.
We test our approach by developing two experiments where people learn and make predictions
about binary and ternary relations. As far as we know, the approach of Goodman [1] is the only
other measure of theory complexity that has previously been tested as a psychological model [6].
We show that our experiments support our approach and raise challenges for this alternative model.
1 Theory complexity: a representation length approach
Intuitive theories correspond to mental representations of some sort, and our first task is to characterize the elements used to build these representations. We explore the idea that a theory is a system of statements in a logical language, and six examples are shown in Fig. 1. The theory in Fig. 1b is related to the defers to(·, ·) example already described. Here we are interested in a domain including 9 elements, and a two-place predicate R(·, ·) that is true of all and only the 15 pairs shown. R is defined using a unary predicate T which is true of only three elements: 6, 7, and 8. The theory includes a clause which states that R(X, Y) is true for all pairs XY such that T(X) is false and T(Y) is true. The theory in Fig. 1c is very similar, but includes an additional clause which specifies that R(1, 1) is true, and an exception which specifies that R(1, 6) is false. Formally, each theory we consider is a collection of function-free definite clauses. All variables are universally quantified: for instance, the clause R(X, Z) ← R(X, Y), R(Y, Z) is equivalent to the logical formula $\forall x\, \forall y\, \forall z\, (R(x, z) \Leftarrow R(x, y) \wedge R(y, z))$. For readability, the theories in Fig. 1 include parentheses and arrows, but note that these symbols are unnecessary and can be removed. Our proposed language includes only predicate symbols, variable symbols, constant symbols, and a period that indicates when one clause finishes and another begins.
Each theory in Fig. 1 specifies the extension of one or more predicates. The extension of predicate P is defined in terms of predicate P+ (which captures the basic rules that lead to membership in P) and predicate P− (which captures exceptions to these rules). The resulting extension of P is defined as P+ \ P−, or the set difference of P+ and P−.¹ Once P has been defined, later clauses in the theory may refer to P or its negation ¬P. To ensure that our semantics is well-defined, the predicates in any valid theory must permit an ordering so that the definition of any predicate does not refer to predicates that follow it in the order. Formally, the definition of each predicate P+ or P− can refer only to itself (recursive definitions are allowed) and to any predicate M or ¬M where M < P.
Once we have committed to a specific language, the subjective complexity of a theory is assumed to correspond to the number of symbols in its representation. We have chosen a language where there is one symbol for each position in a theory where a predicate, variable or constant appears, and one symbol to indicate when each clause ends. Given this language, the subjective complexity c(T) of theory T is equal to the sum of the number of clauses in the theory and the number of positions in the theory where a predicate, variable or constant appears:

$$c(T) = \#\text{clauses}(T) + \#\text{pred slots}(T) + \#\text{var slots}(T) + \#\text{const slots}(T). \qquad (1)$$

For instance, the clause R(X, Z) ← R(X, Y), R(Y, Z). contributes ten symbols towards the complexity of a theory (three predicate symbols, six variable symbols, and one period). Other languages might be considered: for instance, we could use a language which uses five symbols (e.g. five bits) to represent each predicate, variable and constant, and one symbol (e.g. one bit) to indicate the end of a clause. Our approach to subjective complexity depends critically on the representation language, but once a language has been chosen the complexity measure is uniquely specified.
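To make the measure concrete, here is a minimal Python sketch (our own illustration, not the paper's code) that computes c(T) for a theory encoded as a list of clauses; the encoding and names are assumptions made for the example.

```python
# Minimal sketch of the complexity measure in Equation 1. A theory is a list
# of clauses; each clause is a (head, body) pair of atoms, and an atom is a
# (predicate, args) tuple.

def complexity(theory):
    """c(T) = #clauses + #predicate slots + #variable/constant slots."""
    cost = len(theory)                      # one period per clause
    for head, body in theory:
        for pred, args in [head] + list(body):
            cost += 1                       # predicate slot
            cost += len(args)               # variable and constant slots
    return cost

# The clause R(X, Z) <- R(X, Y), R(Y, Z). from the text:
clause = (("R", ("X", "Z")), [("R", ("X", "Y")), ("R", ("Y", "Z"))])
print(complexity([clause]))  # 1 period + 3 predicate slots + 6 variable slots = 10
```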
Although our approach is closely related to the notion of Kolmogorov complexity and to Minimum
Message Length (MML) and Minimum Description Length (MDL) approaches, we refer to it as a
Representation Length (RL) approach. An RL approach includes a commitment to a specific language that is proposed as a psychological hypothesis, but these other approaches aspire towards results that do not depend on the language chosen.² It is sometimes suggested that the notion of Kolmogorov complexity provides a more suitable framework for psychological research than the RL approach, precisely because it allows for results that do not depend on a specific description language [8]. We
subscribe to the opposite view. Mental representations presumably rely on some particular language,
and identifying this language is a central challenge for psychological research.
The language we described should be considered as a tentative approximation of the language of
thought. Other languages can and should be explored, but our language has several appealing properties. Feldman [4] has argued that definite clauses are psychologically natural, and working with these representations allows our approach to account for several classic results from the concept learning literature. For instance, our language leads to the prediction that conjunctive concepts are easier to learn than disjunctive concepts [9].³ Working with definite clauses also ensures that each of our theories has a unique minimal model, which means that the extension of a theory can be defined
in a particularly simple way. Finally, human learners deal gracefully with noise and exceptions, and
our language provides a simple way to handle exceptions.
Any concrete proposal about the language of thought should make predictions about memory, learning and reasoning. Suppose that data set D lists the extensions of one or more predicates, and that a
theory is a "candidate theory" for D if it correctly defines the extensions of all predicates in D. Note that a candidate theory may well include latent predicates: predicates that do not appear in D, but are useful for defining the predicates that have been observed. We will assume that humans encode D in terms of the simplest candidate theory for D, and that the difficulty of memorizing D is determined by the subjective complexity of this theory. Our approach can and should be tested against
classic results from the memory literature. Unlike some other approaches to complexity [10], for
instance, our model predicts that a sequence of k items is about equally easy to remember regardless
of whether the items are drawn from a set of size 2, a set of size 10, or a set of size 1000 [11].
¹The extension of P+ is the smallest set that satisfies all of the clauses that define P+, and the extension of P− is defined similarly. To simplify our notation, Fig. 1 uses P to refer to both P and P+, and P̄ to refer to ¬P and P−. Any instance of P that appears in a clause defining P is really an instance of P+, and any instance of P̄ that appears in a clause defining P̄ is really an instance of P−.
²MDL approaches also commit to a specific language, but this language is often intended to be as general as possible. See, for instance, the discussion of universal codes in Grünwald et al. [7].
³A conjunctive concept C(·) can be defined using a single clause: C(X) ← A(X), B(X). The shortest definition of a disjunctive concept requires two clauses: D(X) ← A(X). D(X) ← B(X).
To develop a model of inductive learning and reasoning, we take a Bayesian approach, and use our complexity measure to define a prior distribution over a hypothesis space of theories: $P(T) \propto 2^{-c(T)}$.⁴ Given this prior distribution, we can use Bayesian inference to make predictions about unobserved relations and to discover the theory T that best accounts for the observations in data set D [12, 13]. Suppose that we have a likelihood function $P(D \mid T)$ which specifies how the examples in D were generated from some underlying theory T. The best explanation for the data D is the theory that maximizes the posterior distribution $P(T \mid D) \propto P(D \mid T)P(T)$. If we need to predict whether ground term g is likely to be true,⁵ we can sum over the space of theories:

$$P(g \mid D) = \sum_{T} P(g \mid T)\, P(T \mid D) = \frac{1}{P(D)} \sum_{T : g \in T} P(D \mid T)\, P(T) \qquad (2)$$

where the final sum is over all theories T that make ground term g true.
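As an illustration, here is a hedged sketch of Equation 2 over a small enumerated hypothesis space. The dictionary encoding and the simple 0/1 consistency likelihood are our own simplifications, not the paper's implementation; D is a set of observed ground terms.

```python
# Hypothetical sketch of Equation 2. `extensions` maps a theory name to its
# set of true ground terms, `cost` maps it to c(T); the prior is
# P(T) proportional to 2^{-c(T)}.

def predict(g, D, extensions, cost):
    post = {name: (2.0 ** -cost[name]) * (1.0 if D <= ext else 0.0)
            for name, ext in extensions.items()}
    Z = sum(post.values())  # proportional to P(D)
    return sum(p for name, p in post.items() if g in extensions[name]) / Z
```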
1.1 Related work
The theories we consider are closely related to logic programs, and methods for Inductive Logic
Programming (ILP) explore how these programs can be learned from examples [14]. ILP algorithms
are often inspired by the idea of searching for the shortest theory that accounts for the available data,
and ILP is occasionally cast as the problem of minimizing an explicit MDL criterion [10]. Although
ILP algorithms are rarely considered as cognitive models, the RL approach has a long psychological
history, and is proposed by Chomsky [15] and Leeuwenberg [16] among others.
Formal measures of complexity have been developed in many ?elds [17], and there is at least one
other psychological account of theory complexity. Goodman [1] developed a complexity measure
that was originally a philosophical proposal about scientific theories, but was later tested as a model of subjective complexity [6]. A detailed description of this measure is not possible here, but we attempt to give a flavor of the approach. Suppose that a basis is a set of predicates. The starting point for Goodman's model is the intuition that basis B1 is at least as complex as basis B2 if B1 can be used to define B2. Goodman argues that this intuition is flawed, but his model is founded on a refinement of this intuition. For instance, since the binary predicate in Fig. 1b can be defined in terms of two unary predicates, Goodman's approach requires that the complexity of the binary predicate is no more than the sum of the complexities of the two unary predicates.
We will use Goodman's model as a baseline for evaluating our own approach, and a comparison
between these two models should be informed by both theoretical and empirical considerations.
On the theoretical side, our approach relies on a simple principle for deciding which structural
properties are relevant to the measurement of complexity: the relevant properties are those with
short logical representations. Goodman's approach incorporates no such principle, and he proposes somewhat arbitrarily that reflexivity and symmetry are among the relevant structural properties but that transitivity is not. A second reason for preferring our model is that it makes contact with a general principle (the idea that simplicity is related to representation length) that has found many
applications across psychology, machine learning, and philosophy.
2 Experimental results
We designed two experiments to explore settings where people learn, remember, and make inductive
inferences about relational data. Although theories often consist of systems of many interlocking
relations, we keep our experiments simple by asking subjects to learn and reason about a single
relation at a time. Despite this restriction, our experiments still make contact with several issues
raised by systems of relations. As the defers to(·, ·) example suggests, a single relation may be best explained as the observable tip of a system involving several latent predicates (e.g. caste 1(·) and caste 2(·)).
⁴To ensure that this distribution can be normalized, we assume that there is some upper bound on the number of predicate symbols, variable symbols, and constants, and on the length of the theories we will consider. There will therefore be a finite number of possible theories, and our prior will be a valid probability distribution.
⁵A ground term is a term such as R(8, 9) that does not include any variables.
[Figure 2 residue: four bar plots over the six conditions (star, bprt, excp, sym, trans, rand): learning time, human complexity ratings, RL model complexity, and Goodman model complexity.]
Figure 2: (a) Average time in seconds to learn the six sets in Fig. 1. (b) Average ratings of set complexity. (c) Complexity scores according to our representation length (RL) model. (d) Complexity scores according to Goodman's model.
2.1 Experiment 1: memory and induction
In our first experiment, we studied the subjective complexity of six binary relations that display a range of structural properties, including reflexivity, symmetry, and transitivity.
Materials and Methods. 18 adults participated in this experiment. Subjects were required to learn
the 6 sets shown in Fig. 1, and to make inductive inferences about each set. Although Fig. 1 shows
pairs of digits, the experiment used letter pairs, and the letters for each condition and the order
in which these conditions were presented were randomized across subjects. The pairs for each
condition were initially laid out randomly on screen, and subjects could drag them around and
organize them to help them understand the structure of the set. At any stage, subjects could enter a
test phase where they were asked to list the 15 pairs belonging to the current set. Subjects who made
an error on the test were returned to the learning phase. After 9 minutes had elapsed, subjects were
allowed to pass the test regardless of how many errors they made.
After passing the test, subjects were asked to rate the complexity of the set compared to other sets
with 15 pairs. Ratings were provided on a 7 point scale. Subjects were then asked to imagine that
a new letter (e.g. letter 9) had belonged to the current alphabet, and were given two inductive tasks.
First they were asked to enter between 1 and 10 novel pairs that they might have expected to see
(each novel pair was required to include the new letter). Next they were told about a novel pair that
belonged to the set (e.g. pair 91), and were again asked to enter up to 10 additional pairs that they
might have expected to see.
Results. The average time needed to learn each set is shown in Fig. 2a, and ratings of set complexity
are shown in Fig. 2b. It is encouraging that these measures yield converging results, but they may be
confounded since subjects rated the complexity of a set immediately after learning it. The complexities plotted in Fig. 2c are the complexities of the theories shown in Fig. 1, which we believe to be
the simplest theories according to our complexity measure. The final plot in Fig. 2 shows complexities according to Goodman's model, which assigns each binary relation an integer between 0 and 4. There are several differences between these models: for instance, Goodman's account incorrectly predicts that the exception case is the hardest of the six, but our model acknowledges that a simple theory remains simple if a handful of exceptions are added. Goodman's account also predicts
that transitivity is not an important structural regularity, but our model correctly predicts that the
transitive set is simpler than the same set with some of the pairs reversed (the random set).
Results for the inductive task are shown in Fig. 3. The first two columns show the number of subjects
who listed each novel pair. The remaining two columns show the probability of set membership
predicted by our model. To generate these predictions, we applied Equation 2 and summed over
a set of theories created by systematically extending the theories shown in Fig. 1. Each extended
theory includes up to one additional clause for each predicate in the base theory, and each additional
clause includes at most two predicate slots. For instance, each extended theory for the bipartite
case is created by choosing whether or not to add the clause T(9), and adding up to one clause for
predicate R.⁶ For the first inductive task, the likelihood term P(D|T) (see Equation 2) is set to 0
for all theories that are not consistent with the pairs observed during training, and to a constant for
all remaining theories. For the second task we assumed in addition that the novel pair observed is
⁶R(9, X), R̄(2, 9), and R(X, 9) ← R(X, 2) are three possible additions.
[Figure 3 residue: six rows of panels (star, random, trans, symm, excep, bipart) with columns Human (no examples), Human (1 example), RL (no examples), RL (one example); per-panel correlations include r = 0.96, 0.98, 0.99, 0.88, 0.93, 0.74, 0.62, and 0.38.]
Figure 3: Data and model predictions for the induction task in Experiment 1. Columns 1 and 3 show predictions before any pairs involving the new letter are observed. Columns 2 and 4 show predictions after a single novel pair (marked with a gray bar) is observed to belong to the set. The model plots for each condition include correlations with the human data.
sampled at random from all pairs involving the new letter.⁷ All model predictions were computed using Mace4 [18] to generate the extension of each theory considered.
The supporting material includes predictions for a model based on the Goodman complexity measure and an exemplar model which assumes that the new letter will be just like one of the old letters.⁸ The
exemplar model outperforms our model in the random condition, and makes accurate predictions
about three other conditions. Overall, however, our model performs better than the two baselines.
Here we focus on two important predictions that are not well handled by the exemplar model. In
the symmetry condition, almost all subjects predict that 78 belongs to the set after learning that 87
belongs to the set, suggesting that they have learned an abstract rule. In the transitive condition,
most subjects predict that pairs 72 through 76 belong to the set after learning that 71 belongs to the
set. Our model accounts for this result, but the exemplar model has no basis for making predictions
about letter 7, since this letter is now known to be unlike any of the others.
2.2 Experiment 2: learning from positive examples
During the learning phase of our first experiment, subjects learned a theory based on positive examples (the theory included all pairs they had seen) and negative examples (the theory ruled out all
pairs they had not seen). Often, however, humans learn theories based on positive examples alone.
Suppose, for instance, that our anthropologist has spent only a few hours with a new tribe. She may
have observed several pairs who are obviously friends, but should realize that many other pairs of
friends have not yet interacted in her presence.
⁷For the second task, P(D|T) is set to 0 for theories that are inconsistent with the training pairs and theories which do not include the observed novel pair. For all remaining theories, P(D|T) is set to 1/n, where n is the total number of novel pairs that are consistent with T.
⁸Supporting material is available at www.charleskemp.com
[Figure 4 residue: five panels with observed triples and candidate theories: (a) R(X, X, X) from 111 222 333 444; (b) R(X, X, 1) from 221 331 441 551; (c) R(X, X, Y) from 221 443 552 663; (d) R(X, Y, Z) from 231 456 615 344; (e) R(2, 3, X) from 231 234 235 236. Human ratings (1 to 7) and RL log probabilities are shown for the test triples 777, 771, 778, 789, and 237.]
Figure 4: Data and model predictions for Experiment 2. The four triples observed for each set are shown at the top of the figure. The first row of plots shows average ratings on a scale from 1 (very unlikely to belong to the set) to 7 (very likely). Model predictions are plotted as log probabilities.
Our framework can handle cases like these if we assume that the data D in Equation 2 are sampled
from the ground terms that are true according to the underlying theory. We follow [10] and [13]
and use a distribution P (D|T ) which assumes that the examples in D are randomly sampled with
replacement from the ground terms that are true. This sampling assumption encourages our model
to identify the theory with the smallest extension that is compatible with all of the training examples.
We tested this approach by designing an experiment where learners were given sets of examples that
were compatible with several underlying theories.
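A short sketch of this sampling assumption under our reading of the text: each example is drawn uniformly with replacement from the theory's extension, which penalizes theories with large extensions (the "size principle").

```python
# Sketch of the sampling assumption just described.

def likelihood(D, extension):
    """P(D|T) under uniform sampling from ext(T); 0 if any example falls outside."""
    if any(x not in extension for x in D):
        return 0.0
    return (1.0 / len(extension)) ** len(D)
```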
Materials and Methods. 15 adults participated in this experiment immediately after taking Experiment 1. In each of five conditions, subjects were told about a set of triples built from an alphabet of 9 letters. They were shown four triples that belonged to the set (Fig. 4), and told that the set might include triples that they had not seen. Subjects then gave ratings on a seven point scale to indicate whether five additional triples (see Fig. 4) were likely to belong to the set.
Results. Average ratings and model predictions are shown in Fig. 4. Model predictions for each
condition were computed using Equation 2 and summing over a space of theories that included the
five theories shown at the top of Fig. 4, variants of these five theories which stated that certain pairs of slots could not be occupied by the same constant,⁹ and theories that included no variables but merely enumerated up to 5 triples.¹⁰
Although there are general theories like R(X, Y, Z) that are compatible with the triples observed in all
five conditions, Fig. 4 shows that people were sensitive to different regularities in each case.¹¹ We
focus on one condition (Fig. 4b) that exposes the strengths and weaknesses of our model. According
to our model, the two most probable theories given the triples for this condition are R(X, X, 1) and the
closely related variant that rules out R(1, 1, 1). The next most probable theory is R(X, X, Y). These
predictions are consistent with people?s judgments that 771 is very likely to belong to the set, and
that 778 is the next most likely option. Unlike our model, however, people consider 777 to be
substantially less likely than 778 to belong to the set. This result may suggest that the variant of
R(X, X, Y) that rules out R(X, X, X) deserves a higher prior probability than our model recognizes. To
better account for cases like this, it may be worth considering languages where any two variables
that belong to the same clause but have different names must refer to different entities.
3 Discussion and Conclusion
There are many psychological models of concept learning [4, 12, 13], but few that use representations rich enough to capture the content of intuitive theories. We suggested that intuitive theories
are mentally represented in a first-order logical language, and proposed a specific hypothesis about
⁹One such theory includes two clauses: R(X, X, Y). R̄(X, X, X).
¹⁰One such theory is the following list of clauses: R(2, 2, 1). R(3, 3, 1). R(4, 4, 1). R(5, 5, 1). R(7, 7, 7).
¹¹Similar results have been found with 9-month-old infants. Cases like Figs. 4b and 4c have been tested in an infant language-learning study where the stimuli were three-syllable strings [19]. 9-month-old infants exposed to strings like the four in Fig. 4c generalized to other strings consistent with the theory R(X, X, Y), but infants in the condition corresponding to Fig. 4b generalized only to strings consistent with the theory R(X, X, 1).
this "language of thought." We assumed that the subjective complexity of a theory depends on the length of its representation in this language, and described experiments which suggest that the resulting complexity measure helps to explain how theories are learned and used for inductive inference.
Our experiments deliberately used stimuli that minimize the influence of prior knowledge. Theories,
however, are cumulative, and the theory that seems simplest to a learner will often depend on her
background knowledge. Our approach provides a natural place for background knowledge to be
inserted. A learner can be supplied with a stock of background predicates, and the shortest representation for a data set will depend on which background predicates are available. Since different
sets of predicates will lead to different predictions about subjective complexity, empirical results can
help to determine the background knowledge that people bring to a given class of problems.
Future work should aim to refine the representation language and complexity measure we proposed. We expect that something like our approach will be suitable for modeling a broad class of intuitive theories, but the specific framework presented here can almost certainly be improved. Future work
should also consider different strategies for searching the space of theories. Some of the strategies developed in the ILP literature should be relevant [14], but a detailed investigation of search
algorithms seems premature until our approach has held up to additional empirical tests. It is comparatively easy to establish whether the theories that are simple according to our approach are also
considered simple by people, and our experiments have made a start in this direction. It is much
harder to establish that our approach captures most of the theories that are subjectively simple, and
more exhaustive experiments are needed before this conclusion can be drawn.
Boolean concept learning has been studied for more than fifty years [4, 9], and many psychologists have made empirical and theoretical contributions to this field. An even greater effort will be needed to crack the problem of theory learning, since the space of intuitive theories is much richer than the space of Boolean concepts. The difficulty of this problem should not be underestimated, but
computational approaches can contribute part of the solution.
Acknowledgments Supported by the William Asbjornsen Albert memorial fellowship (CK), the James S. McDonnell Foundation Causal Learning Collaborative Initiative (NDG, JBT) and the Paul E. Newton chair (JBT).
References
[1] N. Goodman. The structure of appearance. 2nd edition, 1961.
[2] S. Carey. Conceptual change in childhood. MIT Press, Cambridge, MA, 1985.
[3] H. A. Simon. Complexity and the representation of patterned sequences of symbols. Psychological Review, 79:369-382, 1972.
[4] J. Feldman. An algebra of human concept learning. JMP, 50:339-368, 2006.
[5] N. Chater and P. Vitanyi. Simplicity: a unifying principle in cognitive science. TICS, 7:19-22, 2003.
[6] J. T. Krueger. A theory of structural simplicity and its relevance to aspects of memory, perception, and conceptual naturalness. PhD thesis, University of Pennsylvania, 1979.
[7] P. Grünwald, I. J. Myung, and M. Pitt, editors. Advances in Minimum Description Length: Theory and Applications. 2005.
[8] N. Chater. Reconciling simplicity and likelihood principles in perceptual organization. Psychological Review, 103:566-581, 1996.
[9] J. A. Bruner, J. S. Goodnow, and G. J. Austin. A study of thinking. Wiley, 1956.
[10] D. Conklin and I. H. Witten. Complexity-based induction. Machine Learning, 16(3):203-225, 1994.
[11] G. A. Miller. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(1):81-97, 1956.
[12] N. D. Goodman, T. L. Griffiths, J. Feldman, and J. B. Tenenbaum. A rational analysis of rule-based concept learning. In CogSci, 2007.
[13] J. B. Tenenbaum and T. L. Griffiths. Generalization, similarity, and Bayesian inference. BBS, 24:629-641, 2001.
[14] S. Muggleton and L. De Raedt. Inductive logic programming: theory and methods. Journal of Logic Programming, 19-20:629-679, 1994.
[15] N. Chomsky. The logical structure of linguistic theory. University of Chicago Press, Chicago, 1975.
[16] E. L. J. Leeuwenberg. A perceptual coding language for visual and auditory patterns. American Journal of Psychology, 84(3):307-349, 1971.
[17] B. Edmonds. Syntactic measures of complexity. PhD thesis, University of Manchester, 1999.
[18] W. McCune. Mace4 reference manual and guide. Technical Report ANL/MCS-TM-264, Argonne National Laboratory, 2003.
[19] L. Gerken. Decisions, decisions: infant language learning when multiple generalizations are possible. Cognition, 98(3):67-74, 2006.
Bayes-Adaptive POMDPs
Stéphane Ross
McGill University
Montréal, Qc, Canada
sross12@cs.mcgill.ca
Brahim Chaib-draa
Laval University
Québec, Qc, Canada
chaib@ift.ulaval.ca
Joelle Pineau
McGill University
Montréal, Qc, Canada
jpineau@cs.mcgill.ca
Abstract
Bayesian Reinforcement Learning has generated substantial interest recently, as it
provides an elegant solution to the exploration-exploitation trade-off in reinforcement learning. However, most investigations of Bayesian reinforcement learning
to date focus on the standard Markov Decision Processes (MDPs). Our goal is
to extend these ideas to the more general Partially Observable MDP (POMDP)
framework, where the state is a hidden variable. To address this problem, we introduce a new mathematical model, the Bayes-Adaptive POMDP. This new model
allows us to (1) improve knowledge of the POMDP domain through interaction
with the environment, and (2) plan optimal sequences of actions which can tradeoff between improving the model, identifying the state, and gathering reward. We
show how the model can be finitely approximated while preserving the value function. We describe approximations for belief tracking and planning in this model.
Empirical results on two domains show that the model estimate and agent?s return
improve over time, as the agent learns better model estimates.
1 Introduction
In many real world systems, uncertainty can arise in both the prediction of the system's behavior, and the observability of the system's state. Partially Observable Markov Decision Processes (POMDPs)
take both kinds of uncertainty into account and provide a powerful model for sequential decision
making under these conditions. However most solving methods for POMDPs assume that the model
is known a priori, which is rarely the case in practice. For instance in robotics, the POMDP must
reflect exactly the uncertainty on the robot's sensors and actuators. These parameters are rarely
known exactly and therefore must often be approximated by a human designer, such that even if
this approximate POMDP could be solved exactly, the resulting policy may not be optimal. Thus we
seek a decision-theoretic planner which can take into account the uncertainty over model parameters
during the planning process, as well as being able to learn from experience the values of these
unknown parameters.
Bayesian Reinforcement Learning has investigated this problem in the context of fully observable
MDPs [1, 2, 3]. An extension to POMDP has recently been proposed [4], yet this method relies on
heuristics to select actions that will improve the model, thus forgoing any theoretical guarantee on
the quality of the approximation, and on an oracle that can be queried to provide the current state.
In this paper, we draw inspiration from the Bayes-Adaptive MDP framework [2], which is formulated to provide an optimal solution to the exploration-exploitation trade-off. To extend these ideas
to POMDPs, we face two challenges: (1) how to update Dirichlet parameters when the state is a
hidden variable? (2) how to approximate the infinite-dimensional belief space to perform belief monitoring and compute the optimal policy. This paper tackles both problems jointly. The first problem is solved by including the Dirichlet parameters in the state space and maintaining belief states over these parameters. We address the second by bounding the space of Dirichlet parameters to a finite subspace necessary for ε-optimal solutions.
We provide theoretical results for bounding the state space while preserving the value function and
we use these results to derive approximate solving and belief monitoring algorithms. We compare
several belief approximations in two problem domains. Empirical results show that the agent is able
to learn good POMDP models and improve its return as it learns better model estimates.
2 POMDP
A POMDP is defined by finite sets of states S, actions A and observations Z. It has transition probabilities $\{T^{sas'}\}_{s,s' \in S, a \in A}$ where $T^{sas'} = \Pr(s_{t+1} = s' \mid s_t = s, a_t = a)$ and observation probabilities $\{O^{saz}\}_{s \in S, a \in A, z \in Z}$ where $O^{saz} = \Pr(z_t = z \mid s_t = s, a_{t-1} = a)$. The reward function $R : S \times A \to \mathbb{R}$ specifies the immediate reward obtained by the agent. In a POMDP, the state is never observed. Instead the agent perceives an observation $z \in Z$ at each time step, which (along with the action sequence) allows it to maintain a belief state $b \in \Delta S$. The belief state specifies the probability of being in each state given the history of actions and observations experienced so far, starting from an initial belief $b_0$. It can be updated at each time step using Bayes' rule:

$$b_{t+1}(s') = \frac{O^{s' a_t z_{t+1}} \sum_{s \in S} T^{s a_t s'} b_t(s)}{\sum_{s'' \in S} O^{s'' a_t z_{t+1}} \sum_{s \in S} T^{s a_t s''} b_t(s)}.$$
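An illustrative NumPy sketch of this update follows; the array layout (T[s, a, s2] and O[s2, a, z]) is our own assumption, not prescribed by the paper.

```python
import numpy as np

def belief_update(b, a, z, T, O):
    b_next = O[:, a, z] * (T[:, a, :].T @ b)  # b'(s') before normalization
    return b_next / b_next.sum()              # denominator equals Pr(z | b, a)
```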
A policy $\pi : \Delta S \to A$ indicates how the agent should select actions as a function of the current belief. Solving a POMDP involves finding the optimal policy $\pi^*$ that maximizes the expected discounted return over the infinite horizon. The return obtained by following $\pi^*$ from a belief $b$ is defined by Bellman's equation:

$$V^*(b) = \max_{a \in A} \left[ \sum_{s \in S} b(s) R(s, a) + \gamma \sum_{z \in Z} \Pr(z \mid b, a)\, V^*(\tau(b, a, z)) \right],$$

where $\tau(b, a, z)$ is the new belief after performing action $a$ and observation $z$, and $\gamma \in [0, 1)$ is the discount factor.
Exact solving algorithms [5] are usually intractable, except on small domains with only a few states,
actions and observations. Various approximate algorithms, both offline [6, 7, 8] and online [9],
have been proposed to tackle increasingly large domains. However, all these methods require full
knowledge of the POMDP model, which is a strong assumption in practice. Some approaches do
not require knowledge of the model, as in [10], but these approaches generally require a lot of data
and do not address the exploration-exploitation tradeoff.
3 Bayes-Adaptive POMDP
In this section, we introduce the Bayes-Adaptive POMDP (BAPOMDP) model, an optimal decision-theoretic algorithm for learning and planning in POMDPs under parameter uncertainty. Throughout
we assume that the state, action, and observation spaces are finite and known, but that the transition
and observation probabilities are unknown or partially known. We also assume that the reward
function is known as it is generally specified by the user for the specific task he wants to accomplish,
but the model can easily be generalised to learn the reward function as well.
To model the uncertainty on the transition $T^{sas'}$ and observation $O^{saz}$ parameters, we use Dirichlet distributions, which are probability distributions over the parameters of multinomial distributions. Given $\alpha_i$, the number of times event $e_i$ has occurred over $n$ trials, the probabilities $p_i$ of each event follow a Dirichlet distribution, i.e. $(p_1, \dots, p_k) \sim \mathrm{Dir}(\alpha_1, \dots, \alpha_k)$. This distribution represents the probability that a discrete random variable behaves according to some probability distribution $(p_1, \dots, p_k)$, given that the counts $(\alpha_1, \dots, \alpha_k)$ have been observed over $n$ trials ($n = \sum_{i=1}^k \alpha_i$). Its probability density function is defined by $f(p, \alpha) = \frac{1}{B(\alpha)} \prod_{i=1}^k p_i^{\alpha_i - 1}$, where $B$ is the multinomial beta function. The expected value of $p_i$ is $E(p_i) = \frac{\alpha_i}{\sum_{j=1}^k \alpha_j}$.

3.1 The BAPOMDP Model
The BAPOMDP is constructed from the model of the POMDP with unknown parameters. Let $(S, A, Z, T, O, R, \gamma)$ be that model. The uncertainty on the distributions $T^{sa\cdot}$ and $O^{s'a\cdot}$ can be represented by experience counts: $\phi^a_{ss'}$ represents the number of times the transition $(s, a, s')$ occurred; similarly $\psi^a_{s'z}$ is the number of times observation $z$ was made in state $s'$ after doing action $a$. Let $\phi$ be the vector of all transition counts and $\psi$ be the vector of all observation counts. Given the count vectors $\phi$ and $\psi$, the expected transition probability for $T^{sas'}$ is $T_\phi^{sas'} = \frac{\phi^a_{ss'}}{\sum_{s'' \in S} \phi^a_{ss''}}$, and similarly for $O^{s'az}$: $O_\psi^{s'az} = \frac{\psi^a_{s'z}}{\sum_{z' \in Z} \psi^a_{s'z'}}$.
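These expectations are just normalized counts; a small sketch follows (the array shapes phi[s, a, s2] and psi[s2, a, z] are our assumption).

```python
import numpy as np

def expected_model(phi, psi):
    T_hat = phi / phi.sum(axis=2, keepdims=True)  # T_phi^{sas'}
    O_hat = psi / psi.sum(axis=2, keepdims=True)  # O_psi^{s'az}
    return T_hat, O_hat
```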
The objective of the BAPOMDP is to learn an optimal policy, such that actions are chosen to maximize reward taking into account both state and parameter uncertainty. To model this, we follow the Bayes-Adaptive MDP framework, and include the $\phi$ and $\psi$ vectors in the state of the BAPOMDP. Thus, the state space $S'$ of the BAPOMDP is defined as $S' = S \times \mathcal{T} \times \mathcal{O}$, where $\mathcal{T} = \{\phi \in \mathbb{N}^{|S|^2|A|} \mid \forall (s, a), \sum_{s' \in S} \phi^a_{ss'} > 0\}$ represents the space in which $\phi$ lies and $\mathcal{O} = \{\psi \in \mathbb{N}^{|S||A||Z|} \mid \forall (s, a), \sum_{z \in Z} \psi^a_{sz} > 0\}$ represents the space in which $\psi$ lies. The action and observation sets of the BAPOMDP are the same as in the original POMDP. Transition and observation functions of the BAPOMDP must capture how the state and count vectors $\phi, \psi$ evolve after every time step. Consider an agent in a given state $s$ with count vectors $\phi$ and $\psi$, which performs action $a$, causing it to move to state $s'$ and observe $z$. Then the vector $\phi'$ after the transition is defined as $\phi' = \phi + \delta^a_{ss'}$, where $\delta^a_{ss'}$ is a vector full of zeroes, with a 1 for the count $\phi^a_{ss'}$, and the vector $\psi'$ after the observation is defined as $\psi' = \psi + \delta^a_{s'z}$, where $\delta^a_{s'z}$ is a vector full of zeroes, with a 1 for the count $\psi^a_{s'z}$. Note that the probabilities of such transitions and observations occurring must be defined by considering all models and their probabilities as specified by the current Dirichlet distributions, which turn out to be their expectations. Hence, we define $T'$ and $O'$ to be:
$$T'((s, \phi, \psi), a, (s', \phi', \psi')) = \begin{cases} T_\phi^{sas'} O_\psi^{s'az}, & \text{if } \phi' = \phi + \delta^a_{ss'} \text{ and } \psi' = \psi + \delta^a_{s'z} \\ 0, & \text{otherwise.} \end{cases} \qquad (1)$$

$$O'((s, \phi, \psi), a, (s', \phi', \psi'), z) = \begin{cases} 1, & \text{if } \phi' = \phi + \delta^a_{ss'} \text{ and } \psi' = \psi + \delta^a_{s'z} \\ 0, & \text{otherwise.} \end{cases} \qquad (2)$$
Note here that the observation probabilities are folded into the transition function, and that the observation function becomes deterministic. This happens because a state transition in the BAPOMDP automatically specifies which observation is acquired after the transition, via the way the counts are incremented. Since the counts do not affect the reward, the reward function of the BAPOMDP is defined as $R'((s, \phi, \psi), a) = R(s, a)$; the discount factor of the BAPOMDP remains the same. Using these definitions, the BAPOMDP has a known model specified by the tuple $(S', A, Z, T', O', R', \gamma)$.

The belief state of the BAPOMDP represents a distribution over both states and count values. The model is learned by simply maintaining this belief state, as the distribution will concentrate over the most likely models, given the prior and experience so far. If $b_0$ is the initial belief state of the unknown POMDP, and the count vectors $\phi_0 \in \mathcal{T}$ and $\psi_0 \in \mathcal{O}$ represent the prior knowledge on this POMDP, then the initial belief of the BAPOMDP is $b'_0(s, \phi, \psi) = b_0(s)$ if $(\phi, \psi) = (\phi_0, \psi_0)$, and $0$ otherwise. After actions are taken, the uncertainty on the POMDP model is represented by mixtures of Dirichlet distributions (i.e. mixtures of count vectors).
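A sketch of the count evolution and the initial belief just described: counts are kept as flat tuples so that BAPOMDP states (s, phi, psi) can serve as dictionary keys. The flat index layout is our own choice, not the paper's.

```python
def increment(phi, psi, s, a, s2, z, n_s, n_a, n_z):
    """Return new count tuples with phi^a_{s,s2} and psi^a_{s2,z} bumped by one."""
    phi, psi = list(phi), list(psi)
    phi[(s * n_a + a) * n_s + s2] += 1
    psi[(s2 * n_a + a) * n_z + z] += 1
    return tuple(phi), tuple(psi)

def initial_belief(b0, phi0, psi0):
    """b'_0 puts the POMDP prior b0 on the single prior count pair (phi0, psi0)."""
    return {(s, phi0, psi0): p for s, p in b0.items() if p > 0.0}
```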
Note that the BAPOMDP is in fact a POMDP with a countably infinite state space. Hence the belief update function and optimal value function are still defined as in Section 2. However these functions now require summations over $S' = S \times \mathcal{T} \times \mathcal{O}$. Maintaining the belief state is practical only if the number of states with non-zero probabilities is finite. We prove this in the following theorem:

Theorem 3.1. Let $(S', A, Z, T', O', R', \gamma)$ be a BAPOMDP constructed from the POMDP $(S, A, Z, T, O, R, \gamma)$. If $S$ is finite, then at any time $t$, the set $S'_{b'_t} = \{\sigma \in S' \mid b'_t(\sigma) > 0\}$ has size $|S'_{b'_t}| \le |S|^{t+1}$.

Proof. Proof available in [11]. Proceeds by induction from $b'_0$.
The proof of this theorem suggests that it is sufficient to iterate over $S$ and $S'_{b'_{t-1}}$ in order to compute the belief state $b'_t$ when an action and observation are taken in the environment. Hence, Algorithm 3.1 can be used to update the belief state.
Algorithm 3.1: Exact Belief Update in BAPOMDP.
function $\tau(b, a, z)$
    Initialize $b'$ as a 0 vector.
    for all $(s, \phi, \psi, s') \in S'_b \times S$ do
        $b'(s', \phi + \delta^a_{ss'}, \psi + \delta^a_{s'z}) \leftarrow b'(s', \phi + \delta^a_{ss'}, \psi + \delta^a_{s'z}) + b(s, \phi, \psi)\, T_\phi^{sas'} O_\psi^{s'az}$
    end for
    return normalized $b'$

3.2 Exact Solution for BAPOMDP in Finite Horizons

The value function of a BAPOMDP for finite horizons can be represented by a finite set $\Gamma$ of functions $\alpha : S' \to \mathbb{R}$, as in standard POMDPs. For example, an exact solution can be computed using
dynamic programming (see [5] for more details):

$$\begin{aligned}
\Gamma_1^a &= \{\alpha^a \mid \alpha^a(s, \phi, \psi) = R(s, a)\},\\
\Gamma_t^{a,z} &= \{\alpha_i^{a,z} \mid \alpha_i^{a,z}(s, \phi, \psi) = \gamma \sum_{s' \in S} T_\phi^{sas'} O_\psi^{s'az}\, \alpha_i(s', \phi + \delta^a_{ss'}, \psi + \delta^a_{s'z}),\ \alpha_i \in \Gamma_{t-1}\},\\
\Gamma_t^a &= \Gamma_1^a \oplus \Gamma_t^{a,z_1} \oplus \Gamma_t^{a,z_2} \oplus \dots \oplus \Gamma_t^{a,z_{|Z|}} \quad \text{(where } \oplus \text{ is the cross sum operator)},\\
\Gamma_t &= \bigcup_{a \in A} \Gamma_t^a. 
\end{aligned} \qquad (3)$$
Note here that the definition of $\alpha_i^{a,z}(s, \phi, \psi)$ is obtained from the fact that $T'((s, \phi, \psi), a, (s', \phi', \psi'))\, O'((s, \phi, \psi), a, (s', \phi', \psi'), z) = 0$ except when $\phi' = \phi + \delta^a_{ss'}$ and $\psi' = \psi + \delta^a_{s'z}$. The optimal policy is extracted as usual: $\pi^*(b) = \operatorname{argmax}_{\alpha \in \Gamma} \sum_{\sigma \in S'_b} \alpha(\sigma)\, b(\sigma)$. In practice, it will be impossible to compute $\alpha_i^{a,z}(s, \phi, \psi)$ for all $(s, \phi, \psi) \in S'$. In order to compute these more efficiently, we show in the next section that the infinite state space can be reduced to a finite state space, while still preserving the value function to arbitrary precision for any horizon $t$.
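For concreteness, here is a hedged Python sketch of the exact belief update of Algorithm 3.1, representing a belief as a dictionary over (s, phi, psi) states; the helper signatures (such as the `increment` function sketched earlier, passed as `bump`) are assumptions, not the paper's interface.

```python
# T_hat(phi, s, a, s2) and O_hat(psi, s2, a, z) compute expected probabilities
# from counts; `bump` increments the counts and returns new (phi, psi) tuples.

def bapomdp_belief_update(b, a, z, states, T_hat, O_hat, bump):
    b_next = {}
    for (s, phi, psi), p in b.items():
        for s2 in states:
            w = p * T_hat(phi, s, a, s2) * O_hat(psi, s2, a, z)
            if w > 0.0:
                key = (s2,) + bump(phi, psi, s, a, s2, z)
                b_next[key] = b_next.get(key, 0.0) + w
    norm = sum(b_next.values())
    return {k: v / norm for k, v in b_next.items()}
```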
4 Approximating the BAPOMDP: Theory and Algorithms
Solving a BAPOMDP exactly for all belief states is impossible in practice due to the dimensionality of the state space (in particular to the fact that the count vectors can grow unbounded). We now show how we can reduce this infinite state space to a finite state space. This allows us to compute an ε-optimal value function over the resulting finite-dimensional belief space using standard POMDP techniques. Various methods for belief tracking in the infinite model are also presented.
4.1 Approximate Finite Model
We first present an upper bound on the value difference between two states that differ only by
their model estimates $(\phi, \psi)$. This bound uses the following definitions: given $\phi, \phi' \in \mathcal{T}$ and
$\psi, \psi' \in \mathcal{O}$, define
$$D_S^{sa}(\phi, \phi') = \sum_{s' \in S} \big|T_\phi^{sas'} - T_{\phi'}^{sas'}\big| \quad \text{and} \quad D_Z^{sa}(\psi, \psi') = \sum_{z \in Z} \big|O_\psi^{saz} - O_{\psi'}^{saz}\big|,$$
and $N_\phi^{sa} = \sum_{s' \in S} \phi^a_{ss'}$ and $N_\psi^{sa} = \sum_{z \in Z} \psi^a_{sz}$.
Theorem 4.1. Given any $\phi, \phi' \in \mathcal{T}$, $\psi, \psi' \in \mathcal{O}$, and $\gamma \in (0, 1)$, then for all $t$:
$$\sup_{\alpha_t \in \Gamma_t,\, s \in S} |\alpha_t(s, \phi, \psi) - \alpha_t(s, \phi', \psi')| \leq \frac{2\gamma\|R\|_\infty}{(1-\gamma)^2} \sup_{s, s' \in S,\, a \in A} \Bigg[ D_S^{sa}(\phi, \phi') + D_Z^{s'a}(\psi, \psi') + \frac{4}{\ln(\gamma^{-e})} \Bigg( \frac{\sum_{s'' \in S} |\phi^a_{ss''} - \phi'^a_{ss''}|}{(N_\phi^{sa}+1)(N_{\phi'}^{sa}+1)} + \frac{\sum_{z \in Z} |\psi^a_{s'z} - \psi'^a_{s'z}|}{(N_\psi^{s'a}+1)(N_{\psi'}^{s'a}+1)} \Bigg) \Bigg]$$
Proof. Available in [11]; it finds a bound on a 1-step backup and solves the recurrence.
We now use this bound on the $\alpha$-vector values to approximate the space of Dirichlet parameters
within a finite subspace. We use the following definitions: given any $\epsilon > 0$, define $\epsilon' = \frac{\epsilon(1-\gamma)^2}{8\gamma\|R\|_\infty}$,
$\epsilon'' = \frac{\epsilon(1-\gamma)^2 \ln(\gamma^{-e})}{32\gamma\|R\|_\infty}$, $N_S^\epsilon = \max\Big(\frac{|S|(1+\epsilon')}{\epsilon'}, \frac{1}{\epsilon''}\Big) - 1$ and $N_Z^\epsilon = \max\Big(\frac{|Z|(1+\epsilon')}{\epsilon'}, \frac{1}{\epsilon''}\Big) - 1$.
Theorem 4.2. Given any $\epsilon > 0$ and $(s, \phi, \psi) \in S'$ such that $\exists a \in A, s' \in S$ with $N_\phi^{s'a} > N_S^\epsilon$ or
$N_\psi^{s'a} > N_Z^\epsilon$, then $\exists (s, \phi', \psi') \in S'$ such that $\forall a \in A, s' \in S$, $N_{\phi'}^{s'a} \leq N_S^\epsilon$ and $N_{\psi'}^{s'a} \leq N_Z^\epsilon$, where
$|\alpha_t(s, \phi, \psi) - \alpha_t(s, \phi', \psi')| < \epsilon$ holds for all $t$ and $\alpha_t \in \Gamma_t$.
Proof. Available in [11].
Theorem 4.2 suggests that if we want a precision of $\epsilon$ on the value function, we just need to restrict
the space of Dirichlet parameters to count vectors $\phi \in \tilde{\mathcal{T}}_\epsilon = \{\phi \in \mathbb{N}^{|S|^2|A|} \mid \forall a \in A, s \in S,\ 0 <
N_\phi^{sa} \leq N_S^\epsilon\}$ and $\psi \in \tilde{\mathcal{O}}_\epsilon = \{\psi \in \mathbb{N}^{|S||A||Z|} \mid \forall a \in A, s \in S,\ 0 < N_\psi^{sa} \leq N_Z^\epsilon\}$. Since $\tilde{\mathcal{T}}_\epsilon$ and $\tilde{\mathcal{O}}_\epsilon$ are
finite, we can define a finite approximate BAPOMDP as the tuple $(\tilde{S}_\epsilon, A, Z, \tilde{T}_\epsilon, \tilde{O}_\epsilon, \tilde{R}_\epsilon, \gamma)$ where
$\tilde{S}_\epsilon = S \times \tilde{\mathcal{T}}_\epsilon \times \tilde{\mathcal{O}}_\epsilon$ is the finite state space. To define the transition and observation functions over
that finite state space, we need to make sure that when the count vectors are incremented, they stay
within the finite space. To achieve this, we define a projection operator $P_\epsilon : S' \rightarrow \tilde{S}_\epsilon$ that simply
projects every state in $S'$ to its closest state in $\tilde{S}_\epsilon$.
Definition 4.1. Let $d : S' \times S' \rightarrow \mathbb{R}$ be defined such that:
$$d(s, \phi, \psi, s', \phi', \psi') = \begin{cases} \frac{2\gamma\|R\|_\infty}{(1-\gamma)^2} \sup\limits_{\bar{s}, \bar{s}' \in S,\, a \in A} \Big[ D_S^{\bar{s}a}(\phi, \phi') + D_Z^{\bar{s}'a}(\psi, \psi') + \frac{4}{\ln(\gamma^{-e})} \Big( \frac{\sum_{s'' \in S} |\phi^a_{\bar{s}s''} - \phi'^a_{\bar{s}s''}|}{(N_\phi^{\bar{s}a}+1)(N_{\phi'}^{\bar{s}a}+1)} + \frac{\sum_{z \in Z} |\psi^a_{\bar{s}'z} - \psi'^a_{\bar{s}'z}|}{(N_\psi^{\bar{s}'a}+1)(N_{\psi'}^{\bar{s}'a}+1)} \Big) \Big], & \text{if } s = s' \\[2mm] \frac{8\gamma\|R\|_\infty}{(1-\gamma)^2} \Big( 1 + \frac{4}{\ln(\gamma^{-e})} \Big) + \frac{2\|R\|_\infty}{1-\gamma}, & \text{otherwise.} \end{cases}$$
Definition 4.2. Let $P_\epsilon : S' \rightarrow \tilde{S}_\epsilon$ be defined as $P_\epsilon(\sigma) = \operatorname{argmin}_{\sigma' \in \tilde{S}_\epsilon} d(\sigma, \sigma')$.
The function $d$ uses the bound defined in Theorem 4.1 as a distance between states that differ only
by their $\phi$ and $\psi$ vectors, and uses an upper bound on that value when the states differ. Thus
$P_\epsilon$ always maps states $(s, \phi, \psi) \in S'$ to some state $(s, \phi', \psi') \in \tilde{S}_\epsilon$. Note that if $\sigma \in \tilde{S}_\epsilon$, then
$P_\epsilon(\sigma) = \sigma$. Using $P_\epsilon$, the transition and observation functions are defined as follows:
$$\tilde{T}_\epsilon((s, \phi, \psi), a, (s', \phi', \psi')) = \begin{cases} T_\phi^{sas'} O_\psi^{s'az}, & \text{if } (s', \phi', \psi') = P_\epsilon(s', \phi + \delta^a_{ss'}, \psi + \delta^a_{s'z}) \\ 0, & \text{otherwise.} \end{cases} \quad (4)$$
$$\tilde{O}_\epsilon((s, \phi, \psi), a, (s', \phi', \psi'), z) = \begin{cases} 1, & \text{if } (s', \phi', \psi') = P_\epsilon(s', \phi + \delta^a_{ss'}, \psi + \delta^a_{s'z}) \\ 0, & \text{otherwise.} \end{cases} \quad (5)$$
These definitions are the same as the ones in the infinite BAPOMDP, except that now we add an extra
projection to make sure that the incremented count vectors stay in $\tilde{S}_\epsilon$. Finally, the reward function
$\tilde{R}_\epsilon : \tilde{S}_\epsilon \times A \rightarrow \mathbb{R}$ is defined as $\tilde{R}_\epsilon((s, \phi, \psi), a) = R(s, a)$.
Theorem 4.3 bounds the value difference between $\alpha$-vectors computed with this finite model and
the $\alpha$-vectors computed with the original model.
Theorem 4.3. Given any $\epsilon > 0$, $(s, \phi, \psi) \in S'$ and $\alpha_t \in \Gamma_t$ computed from the infinite BAPOMDP,
let $\tilde{\alpha}_t$ be the $\alpha$-vector representing the same conditional plan as $\alpha_t$ but computed with the finite
BAPOMDP $(\tilde{S}_\epsilon, A, Z, \tilde{T}_\epsilon, \tilde{O}_\epsilon, \tilde{R}_\epsilon, \gamma)$; then $|\tilde{\alpha}_t(P_\epsilon(s, \phi, \psi)) - \alpha_t(s, \phi, \psi)| < \frac{\epsilon}{1-\gamma}$.
Proof. Available in [11]; it solves a recurrence over the 1-step approximation in Theorem 4.2.
Because the state space is now finite, solution methods from the literature on finite POMDPs could
theoretically be applied. This includes in particular the equations for $\tau(b, a, z)$ and $V^*(b)$ that were
presented in Section 2. In practice however, even though the state space is finite, it will generally
be very large for small $\epsilon$, such that it may still be intractable, even for small domains. We therefore
favor a faster online solution approach, as described below.
4.2 Approximate Belief Monitoring
As shown in Theorem 3.1, the number of states with non-zero probability grows exponentially in
the planning horizon, thus exact belief monitoring can quickly become intractable. We now discuss
different particle-based approximations that allow polynomial-time belief tracking.
Monte Carlo sampling: Monte Carlo sampling algorithms have been widely used for sequential
state estimation [12]. Given a prior belief $b$, followed by action $a$ and observation $z$, the new belief
$b'$ is obtained by first sampling $K$ states from the distribution $b$; then for each sampled $s$ a new state
$s'$ is sampled from $T(s, a, \cdot)$. Finally, the probability $O(s', a, z)$ is added to $b'(s')$ and the belief $b'$
is re-normalized. This will capture at most $K$ states with non-zero probabilities. In the context of
BAPOMDPs, we use a slight variation of this method, where $(s, \phi, \psi)$ are first sampled from $b$, and
then a next state $s' \in S$ is sampled from the normalized distribution proportional to $T_\phi^{sas'} O_\psi^{s'az}$. The probability $1/K$
is then added directly to $b'(s', \phi + \delta^a_{ss'}, \psi + \delta^a_{s'z})$.
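A minimal sketch of this particle-based variant, reusing the illustrative helpers of the earlier belief-update sketch (`sample_next_state` is an assumed helper that draws $s'$ proportionally to $T_\phi^{sas'} O_\psi^{s'az}$):

```python
import random

def monte_carlo_belief(b, a, z, K, sample_next_state, delta_T, delta_O):
    """Particle approximation of the BAPOMDP belief update with K particles."""
    states, probs = zip(*b.items())
    b_new = {}
    for _ in range(K):
        s, phi, psi = random.choices(states, weights=probs)[0]
        s_next = sample_next_state(s, phi, psi, a, z)
        key = (s_next, delta_T(phi, s, a, s_next), delta_O(psi, s_next, a, z))
        b_new[key] = b_new.get(key, 0.0) + 1.0 / K   # each particle adds 1/K
    return b_new
```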
Most Probable: Alternately, we can do the exact belief update at a given time step, but then only
keep the $K$ most probable states in the new belief $b'$ and renormalize $b'$.
Weighted Distance Minimization: The two previous methods only try to approximate the distribution $\tau(b, a, z)$. However, in practice, we care most about the agent's expected reward. Hence,
instead of keeping the $K$ most likely states, we can keep $K$ states which best approximate the belief's value. As in the Most Probable method, we do an exact belief update; however, in this case
we fit the posterior distribution using a greedy K-means procedure, where distance is defined as in
Definition 4.1, weighted by the probability of the state to remove. See [11] for algorithmic details.
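For reference, the Most Probable scheme amounts to a sort-and-renormalize step on top of the exact update; a minimal sketch:

```python
def most_probable(b, K):
    """'Most Probable' approximation: keep the K most probable states of the
    exactly-updated belief b and renormalize the retained mass."""
    top = sorted(b.items(), key=lambda kv: kv[1], reverse=True)[:K]
    total = sum(v for _, v in top)
    return {k: v / total for k, v in top}
```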
4.3 Online planning
While the finite model presented in Section 4.1 can be used to find provably near-optimal policies
offline, this will likely be intractable in practice due to the very large state space required to ensure
good precision. Instead, we turn to online lookahead search algorithms, which have been proposed
for solving standard POMDPs [9]. Our approach simply performs dynamic programming over all the
beliefs reachable within some fixed finite planning horizon from the current belief. The action with
highest return over that finite horizon is executed and then planning is conducted again on the next
belief. To further limit the complexity of the online planning algorithm, we used the approximate
belief monitoring methods detailed above. Its overall complexity is in $O((|A||Z|)^D C_b)$, where $D$ is
the planning horizon and $C_b$ is the complexity of updating the belief.
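A schematic sketch of the $D$-step lookahead value (the action executed at the root is the argmax over $a$ of the same quantity). Here `belief_update(b, a, z)` is assumed to return both the updated belief and $\Pr(z \mid b, a)$, and `expected_reward(b, a)` the belief-weighted immediate reward; both names are illustrative:

```python
def lookahead_value(b, depth, A, Z, belief_update, expected_reward, gamma):
    """Value of belief b under a depth-step lookahead (sketch, not a
    drop-in planner); complexity is O((|A||Z|)^depth * C_b)."""
    if depth == 0:
        return 0.0
    best = float('-inf')
    for a in A:
        q = expected_reward(b, a)
        for z in Z:
            b_next, p_z = belief_update(b, a, z)   # belief and Pr(z | b, a)
            if p_z > 0.0:
                q += gamma * p_z * lookahead_value(
                    b_next, depth - 1, A, Z, belief_update,
                    expected_reward, gamma)
        best = max(best, q)
    return best
```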
5 Empirical Results
We begin by evaluating the different belief approximations introduced above. To do so, we use a
simple online $d$-step lookahead search, and compare the overall expected return and model accuracy
in two different problems: the well-known Tiger [5] and a new domain called Follow. Given $T^{sas'}$
and $O^{s'az}$, the exact probabilities of the (unknown) POMDP, the model accuracy is measured in
terms of the weighted sum of L1-distances, denoted $WL1$, between the exact model and the probable
models in a belief state $b$:
$$WL1(b) = \sum_{(s, \phi, \psi) \in S'_b} b(s, \phi, \psi)\, L1(\phi, \psi)$$
$$L1(\phi, \psi) = \sum_{a \in A} \Bigg[ \sum_{s \in S} \sum_{s' \in S} \big|T_\phi^{sas'} - T^{sas'}\big| + \sum_{s' \in S} \sum_{z \in Z} \big|O_\psi^{s'az} - O^{s'az}\big| \Bigg] \quad (6)$$
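Eq. (6) transcribes directly into code; a minimal sketch, with `T_true`/`O_true` holding the exact probabilities and `T_est`/`O_est` the illustrative Dirichlet-mean helpers used in the sketches above:

```python
def wl1(b, S, A, Z, T_true, O_true, T_est, O_est):
    """Weighted L1 model-accuracy measure of Eq. (6).
    T_true[s][a][s2] and O_true[s2][a][z] are the exact POMDP probabilities."""
    total = 0.0
    for (s, phi, psi), p in b.items():
        l1 = 0.0
        for a in A:
            l1 += sum(abs(T_est(phi, s1, a, s2) - T_true[s1][a][s2])
                      for s1 in S for s2 in S)
            l1 += sum(abs(O_est(psi, s2, a, z) - O_true[s2][a][z])
                      for s2 in S for z in Z)
        total += p * l1
    return total
```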
5.1 Tiger
In the Tiger problem [5], we consider the case where the transition and reward parameters are known,
but the observation probabilities are not. Hence, there are four unknown parameters: $O_{Ll}$, $O_{Lr}$,
$O_{Rl}$, $O_{Rr}$ (where $O_{Lr}$ stands for $\Pr(z = \text{hear right} \mid s = \text{tiger left}, a = \text{Listen})$). We define the
observation count vector $\psi = (\psi_{Ll}, \psi_{Lr}, \psi_{Rl}, \psi_{Rr})$. We consider a prior of $\psi_0 = (5, 3, 3, 5)$, which
specifies an expected sensor accuracy of 62.5% (instead of the correct 85%) in both states. Each
simulation consists of 100 episodes. Episodes terminate when the agent opens a door, at which
point the POMDP state (i.e., the tiger's position) is reset, but the distribution over count vectors is carried
over to the next episode.
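As a sanity check on the prior, the Dirichlet means implied by $\psi_0 = (5, 3, 3, 5)$ give exactly the 62.5% expected accuracy quoted above:

```python
# Prior counts: (psi_Ll, psi_Lr, psi_Rl, psi_Rr) = (5, 3, 3, 5).
# The Dirichlet-mean accuracy in the left state is psi_Ll / (psi_Ll + psi_Lr).
psi_Ll, psi_Lr, psi_Rl, psi_Rr = 5, 3, 3, 5
print(psi_Ll / (psi_Ll + psi_Lr))  # 0.625, i.e. 62.5% (true value: 0.85)
print(psi_Rr / (psi_Rl + psi_Rr))  # 0.625 in the right state as well
```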
Figures 1 and 2 show how the average return and model accuracy evolve over the 100 episodes
(results are averaged over 1000 simulations), using an online 3-step lookahead search with varying
belief approximations and parameters. Returns obtained by planning directly with the prior and exact model (without learning) are shown for comparison. Model accuracy is measured on the initial
belief of each episode. Figure 3 compares the average planning time per action taken by each approach. We observe from these figures that the results for the Most Probable and Weighted Distance
approximations are very similar and perform well even with few particles (lines are overlapping in
many places, making Weighted Distance results hard to see). On the other hand, the performance
of Monte Carlo is significantly affected by the number of particles: it required many more particles
(64) to obtain an improvement over the prior. This may be due to the sampling error that is
introduced when using fewer samples.

[Figure 1: Return with different belief approximations. Figure 2: Model accuracy (WL1) with different belief approximations. Figure 3: Planning time per action (ms) with different belief approximations. (Plots omitted; legends: Exact model, Prior model, Most Probable (2), Monte Carlo (64), Weighted Distance (2).)]
5.2 Follow
We propose a new POMDP domain, called Follow, inspired by an interactive human-robot task. It
is often the case that such domains are particularly subject to parameter uncertainty (due to the difficulty of modelling human behavior), so this environment motivates the utility of the Bayes-Adaptive
POMDP in a very practical way. The goal of the Follow task is for a robot to continuously follow one
of two individuals in a 2D open area. The two subjects have different motion behavior, requiring the
robot to use a different policy for each. At every episode, the target person is selected randomly with
$\Pr = 0.5$ (and the other is not present). The person's identity is not observable (except through their
motion). The state space has two features: a binary variable indicating which person is being followed, and a position variable indicating the person's position relative to the robot ($5 \times 5$ square grid
with the robot always at the center). Initially, the robot and person are at the same position. Both the
robot and the person can perform five motion actions $\{NoAction, North, East, South, West\}$.
The person follows a fixed stochastic policy (stationary over space and time), but the parameters of
this behavior are unknown. The robot perceives observations indicating the person's position relative to the robot: $\{Same, North, East, South, West, Unseen\}$. The robot perceives the correct
observation with $\Pr = 0.8$ and $Unseen$ with $\Pr = 0.2$. The reward $R = +1$ if the robot and person
are at the same position (central grid cell), $R = 0$ if the person is one cell away from the robot, and
$R = -1$ if the person is two cells away. The task terminates if the person reaches a distance of 3
cells away from the robot, also incurring a reward of -20. We use a discount factor of 0.9.
When formulating the BAPOMDP, the robot's motion model (deterministic), the observation
probabilities and the rewards are assumed to be known. We maintain a separate count vector for each person, representing the number of times they move in each direction, i.e., $\phi_1 =
(\phi_1^{NA}, \phi_1^{N}, \phi_1^{E}, \phi_1^{S}, \phi_1^{W})$ and $\phi_2 = (\phi_2^{NA}, \phi_2^{N}, \phi_2^{E}, \phi_2^{S}, \phi_2^{W})$. We assume a prior $\phi_1^0 = (2, 3, 1, 2, 2)$
for person 1 and $\phi_2^0 = (2, 1, 3, 2, 2)$ for person 2, while in reality person 1 moves with probabilities
$\Pr = (0.3, 0.4, 0.2, 0.05, 0.05)$ and person 2 with $\Pr = (0.1, 0.05, 0.8, 0.03, 0.02)$. We run 200
simulations, each consisting of 100 episodes (of at most 10 time steps). The count vectors' distributions are reset after every simulation, and the target person is reset after every episode. We use a
2-step lookahead search for planning in the BAPOMDP.
Figures 4 and 5 show how the average return and model accuracy evolve over the 100 episodes (averaged over the 200 simulations) with different belief approximations. Figure 6 compares the planning
time taken by each approach. We observe from these figures that the results for the Weighted Distance approximation are much better both in terms of return and model accuracy, even with fewer
particles (16). Monte Carlo fails to provide any improvement over the prior model, which indicates it would require many more particles. Running Weighted Distance with 16 particles requires
less time than both Monte Carlo and Most Probable with 64 particles, showing that it can be more
time-efficient for the performance it provides in complex environments.
[Figure 4: Return with different belief approximations. Figure 5: Model accuracy (WL1) with different belief approximations. Figure 6: Planning time per action (ms) with different belief approximations. (Plots omitted; legends: Exact model, Prior model, Most Probable (64), Monte Carlo (64), Weighted Distance (16).)]
6 Conclusion
The objective of this paper was to propose a preliminary decision-theoretic framework for learning
and acting in POMDPs under parameter uncertainty. This raises a number of interesting challenges,
including (1) defining the appropriate model for POMDP parameter uncertainty, (2) approximating
this model while maintaining performance guarantees, (3) performing tractable belief updating, and
(4) planning action sequences which optimally trade-off exploration and exploitation.
We proposed a new model, the Bayes-Adaptive POMDP, and showed that it can be approximated
to $\epsilon$-precision by a finite POMDP. We provided practical approaches for belief tracking and online
planning in this model, and validated these using two experimental domains. Results in the Follow
problem showed that our approach is able to learn the motion patterns of two (simulated) individuals. This suggests interesting applications in human-robot interaction, where it is often essential that
we be able to reason and plan under parameter uncertainty.
Acknowledgments
This research was supported by the Natural Sciences and Engineering Research Council of Canada
(NSERC) and the Fonds Québécois de la Recherche sur la Nature et les Technologies (FQRNT).
References
[1] R. Dearden, N. Friedman, and N. Andre. Model based Bayesian exploration. In UAI, 1999.
[2] M. Duff. Optimal Learning: Computational Procedure for Bayes-Adaptive Markov Decision Processes. PhD thesis, University of Massachusetts, Amherst, USA, 2002.
[3] P. Poupart, N. Vlassis, J. Hoey, and K. Regan. An analytic solution to discrete Bayesian reinforcement learning. In Proc. ICML, 2006.
[4] R. Jaulmes, J. Pineau, and D. Precup. Active learning in partially observable Markov decision processes. In ECML, 2005.
[5] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99-134, 1998.
[6] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: an anytime algorithm for POMDPs. In IJCAI, pages 1025-1032, Acapulco, Mexico, 2003.
[7] M. Spaan and N. Vlassis. Perseus: randomized point-based value iteration for POMDPs. JAIR, 24:195-220, 2005.
[8] T. Smith and R. Simmons. Heuristic search value iteration for POMDPs. In UAI, Banff, Canada, 2004.
[9] S. Paquet, L. Tobin, and B. Chaib-draa. An online POMDP algorithm for complex multiagent environments. In AAMAS, 2005.
[10] Jonathan Baxter and Peter L. Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research (JAIR), 15:319-350, 2001.
[11] Stéphane Ross, Brahim Chaib-draa, and Joelle Pineau. Bayes-adaptive POMDPs. Technical Report SOCS-TR-2007.6, McGill University, 2007.
[12] A. Doucet, N. de Freitas, and N. Gordon. Sequential Monte Carlo Methods in Practice. Springer, 2001.
| 3333 |@word trial:2 exploitation:4 polynomial:1 open:2 seek:1 simulation:5 initial:4 freitas:1 current:4 wd:2 yet:1 must:4 analytic:1 remove:1 update:6 stationary:1 greedy:1 fewer:2 selected:1 intelligence:2 smith:1 recherche:1 lr:1 provides:2 banff:1 five:1 unbounded:1 mathematical:1 along:1 constructed:2 beta:1 become:1 prove:1 consists:1 introduce:2 theoretically:1 acquired:1 expected:6 wl1:2 p1:2 planning:20 behavior:4 bellman:1 inspired:1 discounted:1 automatically:1 considering:1 perceives:3 becomes:1 project:1 begin:1 provided:1 maximizes:1 kind:1 maxa:1 perseus:1 finding:1 guarantee:2 every:5 tackle:2 ebec:1 interactive:1 exactly:4 generalised:1 engineering:1 limit:1 nz:3 eb:1 suggests:3 averaged:2 practical:3 acknowledgment:1 practice:8 differs:1 procedure:2 area:1 empirical:3 significantly:1 projection:2 operator:2 context:2 impossible:2 deterministic:2 map:1 dz:3 center:1 starting:1 pomdp:31 qc:3 identifying:1 rule:1 variation:1 updated:1 mcgill:5 target:2 simmons:1 user:1 exact:12 programming:2 us:3 approximated:3 particularly:1 updating:2 observed:2 solved:2 capture:2 episode:13 trade:3 incremented:3 highest:1 substantial:1 environment:5 complexity:3 reward:14 littman:1 dynamic:2 raise:1 solving:6 easily:1 various:2 represented:3 describe:1 monte:10 artificial:2 heuristic:2 widely:1 s:18 otherwise:6 favor:1 paquet:1 jointly:1 online:9 sequence:3 rr:1 propose:2 interaction:2 reset:3 causing:2 date:1 achieve:1 lookahead:4 az:10 ijcai:1 derive:1 measured:2 finitely:1 b0:3 sa:30 solves:2 strong:1 c:2 involves:1 differ:2 concentrate:1 direction:1 correct:2 stochastic:2 exploration:5 human:4 require:5 brahim:2 investigation:1 preliminary:1 probable:10 acapulco:1 summation:1 extension:1 hold:1 cb:2 algorithmic:1 estimation:2 proc:1 ross:2 council:1 weighted:11 minimization:1 sensor:2 always:2 varying:1 validated:1 focus:1 improvement:2 modelling:1 indicates:2 sb:5 bt:2 initially:1 hidden:2 provably:1 arg:1 overall:2 denoted:1 priori:1 plan:3 initialize:1 never:1 sampling:4 represents:5 icml:1 ephane:2 report:1 gordon:2 few:2 randomly:1 individual:2 argmax:1 consisting:1 maintain:2 friedman:1 montr:2 interest:1 mixture:2 tuple:2 necessary:1 experience:3 draa:3 re:1 renormalize:1 theoretical:2 instance:1 eal:2 kaelbling:1 conducted:1 optimally:1 dir:1 accomplish:1 st:5 density:1 person:18 amherst:1 randomized:1 stay:2 off:3 quickly:1 continuously:1 precup:1 again:1 reflect:1 central:1 thesis:1 return:15 forgoing:1 account:3 de:2 orr:1 includes:1 mp:2 h1:1 lot:1 try:1 doing:1 sup:3 bayes:10 square:1 accuracy:9 efficiently:1 bayesian:5 mc:2 carlo:10 monitoring:5 pomdps:12 history:1 reach:1 andre:1 definition:8 proof:9 chaib:4 sampled:4 massachusetts:1 knowledge:4 listen:1 anytime:1 jair:2 follow:8 though:1 just:1 olr:2 d:1 hand:1 ei:1 o:3 overlapping:1 pineau:4 quality:1 mdp:3 grows:1 usa:1 normalized:3 requiring:1 hence:5 inspiration:1 ll:1 during:1 recurrence:2 ulaval:1 m:2 theoretic:2 performs:2 l1:5 motion:5 recently:2 behaves:1 multinomial:2 laval:1 rl:1 exponentially:1 extend:2 he:1 occurred:2 slight:1 queried:1 grid:2 similarly:2 hp:1 particle:7 had:1 reachable:1 robot:16 add:1 closest:1 posterior:1 showed:2 fqrnt:1 binary:1 joelle:2 preserving:3 care:1 maximize:1 full:3 technical:1 faster:1 cross:1 a1:2 prediction:1 expectation:1 iteration:3 represent:1 robotics:1 cell:4 want:2 grow:1 extra:1 sure:2 south:2 subject:2 elegant:1 tobin:1 near:1 door:1 jaulmes:1 baxter:1 iterate:1 affect:1 fit:1 restrict:1 observability:1 idea:2 reduce:1 tradeoff:2 utility:1 bartlett:1 peter:1 
action:22 generally:3 detailed:1 discount:3 reduced:1 specifies:4 designer:1 per:1 discrete:2 affected:1 four:1 sum:2 run:1 uncertainty:13 powerful:1 place:1 planner:1 throughout:1 draw:1 decision:7 orl:1 bound:7 followed:2 oracle:1 min:1 formulating:1 performing:2 according:1 terminates:1 increasingly:1 spaan:1 qu:2 making:2 happens:1 pr:4 gathering:1 hoey:1 taken:4 ln:4 equation:2 remains:1 turn:2 count:22 discus:1 tractable:1 end:1 available:4 actuator:1 observe:3 away:3 appropriate:1 original:2 dirichlet:9 include:1 ensure:1 running:1 maintaining:4 approximating:2 objective:2 move:3 added:2 usual:1 gradient:1 subspace:2 distance:13 separate:1 simulated:1 thrun:1 poupart:1 reason:1 induction:1 sur:1 providing:1 mexico:1 executed:1 zt:2 policy:10 unknown:7 perform:3 motivates:1 upper:2 observation:27 dimensionnality:1 markov:4 finite:27 ecml:1 immediate:1 defining:1 vlassis:2 duff:1 arbitrary:1 thm:1 canada:5 introduced:2 required:1 specified:3 learned:1 alternately:1 address:3 able:4 proceeds:1 usually:1 below:1 pattern:1 challenge:2 hear:1 including:2 max:2 belief:52 dearden:1 ia:4 event:2 difficulty:1 natural:1 representing:2 jpineau:1 improve:4 technology:1 mdps:2 nseen:2 carried:1 prior:10 literature:1 evolve:3 relative:2 fully:1 par:1 multiagent:1 interesting:2 regan:1 agent:9 sufficient:1 pi:4 ift:1 supported:1 keeping:1 lef:1 offline:2 allow:1 face:1 taking:1 world:1 transition:15 evaluating:1 stand:1 made:1 adaptive:10 reinforcement:5 far:2 approximate:11 observable:6 countably:1 keep:2 sz:2 doucet:1 active:1 uai:2 sat:2 assumed:1 search:5 reality:1 learn:5 terminate:1 nature:1 ca:3 improving:1 as:3 investigated:1 complex:2 domain:11 pk:4 bounding:2 backup:1 arise:1 aamas:1 en:1 n:4 precision:4 experienced:1 position:6 orth:2 fails:1 lie:2 learns:2 theorem:10 specific:1 showing:1 oll:1 intractable:4 essential:1 sequential:3 phd:1 occurring:1 fonds:1 horizon:9 cassandra:1 simply:3 likely:3 nserc:1 tracking:4 partially:5 springer:1 relies:1 extracted:1 goal:2 formulated:1 identity:1 tiger:5 hard:1 infinite:9 except:4 folded:1 acting:2 called:2 experimental:1 la:2 est:2 east:2 rarely:2 select:2 decisiontheoretic:1 indicating:3 jonathan:1 |
2,574 | 3,334 | Estimating disparity with confidence from energy
neurons
Eric K. C. Tsang
Dept. of Electronic and Computer Engr.
Hong Kong Univ. of Sci. and Tech.
Kowloon, HONG KONG SAR
eeeric@ee.ust.hk
Bertram E. Shi
Dept. of Electronic and Computer Engr.
Hong Kong Univ. of Sci. and Tech.
Kowloon, HONG KONG SAR
eebert@ee.ust.hk
Abstract
The peak location in a population of phase-tuned neurons has been shown to be a
more reliable estimator for disparity than the peak location in a population of
position-tuned neurons. Unfortunately, the disparity range covered by a phase-tuned population is limited by phase wraparound. Thus, a single population cannot cover the large range of disparities encountered in natural scenes unless the
scale of the receptive fields is chosen to be very large, which results in very low
resolution depth estimates. Here we describe a biologically plausible measure of
the confidence that the stimulus disparity is inside the range covered by a population of phase-tuned neurons. Based upon this confidence measure, we propose an
algorithm for disparity estimation that uses many populations of high-resolution
phase-tuned neurons that are biased to different disparity ranges via position
shifts between the left and right eye receptive fields. The population with the
highest confidence is used to estimate the stimulus disparity. We show that this
algorithm outperforms a previously proposed coarse-to-fine algorithm for disparity estimation, which uses disparity estimates from coarse scales to select the
populations used at finer scales and can effectively detect occlusions.
1 Introduction
Binocular disparity, the displacement between the image locations of an object between two eyes or
cameras, is an important depth cue. Mammalian brains appear to represent the stimulus disparity
using populations of disparity-tuned neurons in the visual cortex [1][2]. The binocular energy
model is a first order model that explains the responses of individual disparity-tuned neurons [3]. In
this model, the preferred disparity tuning of the neurons is determined by the phase and position
shifts between the left and right monocular receptive fields (RFs).
Peak picking is a common disparity estimation strategy for these neurons ([4]-[6]). In this strategy,
the disparity estimates are computed by the preferred disparity of the neuron with the largest
response among the neural population. Chen and Qian [4] have suggested that the peak location in
a population of phase-tuned disparity energy neurons is a more reliable estimate than the peak location in a population of position-tuned neurons.
It is difficult to estimate disparity from a single phase-tuned neuron population because its range of
preferred disparities is limited. Figure 1 shows the population response of phase-tuned neurons
(vertical cross section) for different stimulus disparities. If the stimulus disparity is confined to the
range of preferred disparities of this population, the peak location changes linearly with the stimulus disparity. Thus, we can estimate the disparity from the peak. However, under natural viewing conditions, the stimulus disparity spans a range over ten times larger than the range of the preferred disparities of
the population [7]. The peak location no longer indicates the stimulus disparity, since the peaks still
occur even when the stimulus disparity is outside the range of the neurons' preferred disparities. The
false peaks arise from two sources: the phase wrap-around due to the sinusoidal modulation in the
[Figure 1: image omitted; vertical axis $D_{pref}$ (preferred disparity, pixels), horizontal axis stimulus disparity (pixels, from -40 to 40).]
Fig. 1: Sample population responses of the phase-tuned disparity neurons for different disparities.
This was generated by presenting the left image of the "Cones" stereogram shown in Figure 5a to
both eyes but varying the disparity by keeping the left image fixed and shifting the right image. At
each point, the image intensity represents the response of a disparity neuron tuned to a fixed
preferred disparity (vertical axis) in response to a fixed stimulus disparity (horizontal axis). The
dashed vertical lines indicate the stimulus disparities that fall within the range of preferred
disparities of the population ($\pm 8$ pixels).
Gabor function modelling the neuron's receptive field (RF) profile, or unmatched edges entering the
neuron's RF [5].
Although a single population can cover a large disparity range, the large size of the required receptive fields results in very low resolution depth estimates. To address this problem, Chen and Qian
[4] proposed a coarse-to-fine algorithm which refines the estimates computed from coarse scales
using populations tuned to finer scales.
Here we present an alternative way to estimate the stimulus disparity using a biologically plausible
confidence measure that indicates whether the stimulus disparity lies inside or outside the range of
preferred disparities in a population of phase tuned neurons. We motivate this measure by examining the empirical statistics of the model neuron responses on natural images. Finally, we demonstrate the efficacy of using this measure to estimate the stimulus disparity. Our model generates
better estimates than the coarse-to-fine approach [4], and can detect occlusions.
2 Features of the phase-tuned disparity population
In this section, we define different features of a population of phase-tuned neurons. These features
will be used to define the confidence measure. Figure 2a illustrates the binocular disparity energy
model of a phase-tuned neuron [3]. For simplicity, we assume 1D processing, which is equivalent
to considering one orientation in the 2D case. The response of a binocular simple cell is modelled
by summing of the outputs of linear monocular Gabor filters applied to both left and right images,
followed by a positive or negative half squaring nonlinearity. The response of a binocular complex
cell is the sum of the four simple cell responses.
Formally, we define the left and right retinal images by $U_l(x)$ and $U_r(x)$, where $x$ denotes the distance from the RF center. The disparity $d$ is the difference between the locations of corresponding
points in the left and right images, i.e., an object that appears at point $x + d$ in the left image
appears at point $x$ in the right image. Pairs of monocular responses are generated by integrating
image intensities weighted by pairs of phase quadrature RF profiles, which are the real and imaginary parts of a complex-valued Gabor function ($j = \sqrt{-1}$):
$$h(x, \phi) = g(x) e^{j(\omega x + \phi)} = g(x)\cos(\omega x + \phi) + j g(x)\sin(\omega x + \phi) \quad (1)$$
where $\omega$ and $\phi$ are the spatial frequency and the phase of the left and right monocular RFs, and
$g(x)$ is a zero-mean Gaussian with standard deviation $\sigma$, which is inversely proportional to the spatial frequency bandwidth. The spatial frequency and the standard deviation of the left and right RFs
are identical, but the phases may differ ($\phi_l$ and $\phi_r$). We can compactly express the pairs of left
and right monocular responses as the real and imaginary parts of $V_l(\phi_l) = V_l e^{j\phi_l}$ and
$V_r(\phi_r) = V_r e^{j\phi_r}$, where with a slight abuse of notation, we define
$$V_l = \int g(x) e^{j\omega x} U_l(x)\, dx \quad \text{and} \quad V_r = \int g(x) e^{j\omega x} U_r(x)\, dx \quad (2)$$
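As a concrete discrete approximation of Eqs. (1)-(2), the complex monocular response can be computed with a short NumPy routine (a sketch; the window size and sampling grid are our own choices, not from the paper):

```python
import numpy as np

def monocular_response(u, omega, sigma):
    """Discrete approximation of Eq. (2): V = sum_x g(x) e^{j omega x} u(x),
    where u is a 1D image patch centered on the RF and x is the pixel offset
    from the RF center."""
    x = np.arange(len(u)) - len(u) // 2
    g = np.exp(-x**2 / (2.0 * sigma**2))          # zero-mean Gaussian envelope
    return np.sum(g * np.exp(1j * omega * x) * u)  # complex response V
```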
[Figure 2: diagram omitted; panel (a) shows the binocular energy model (left/right monocular Gabor filters $h(x, \phi_l)$ with $\phi_l = 0$ and $h(x, \phi_r)$ with $\phi_r = \pi/2$, half-squaring nonlinearities, binocular simple cells, and a binocular complex cell); panel (b) shows the population response $E_d(\Delta\phi)$ with its features $S$ and $P$.]
Fig. 2: (a) Binocular disparity energy model of a disparity neuron in the phase-shift mechanism.
The phase-shift $\Delta\phi = \phi_r - \phi_l$ between the left and right monocular RFs determines the preferred
disparity of the neuron. The neuron shown is tuned to a negative disparity of $-\pi/(2\omega)$. (b) The
population response of the phase-tuned neurons $E_d(\Delta\phi)$ centered at a retinal location with the
phase-shifts $\Delta\phi \in [-\pi, \pi]$ can be characterized by three features $S$, $P$ and $\Delta\theta$.
The response of the binocular complex cell (the disparity energy) is the squared modulus of the
sum of the monocular responses:
$$E_d(\Delta\phi) = \big|V_l e^{j\phi_l} + V_r e^{j\phi_r}\big|^2 = |V_l|^2 + V_l V_r^* e^{-j\Delta\phi} + V_l^* V_r e^{j\Delta\phi} + |V_r|^2 \quad (3)$$
where the $*$ superscript indicates complex conjugation. The phase-shift between the right and
left neurons $\Delta\phi = \phi_r - \phi_l$ controls the preferred disparity $D_{pref}(\Delta\phi) \triangleq -\Delta\phi/\omega$ of the binocular
complex cell [6].
If we fix the stimulus and allow $\Delta\phi$ to vary between $\pm\pi$, the function $E_d(\Delta\phi)$ in (3) describes the
population response of phase-tuned neurons whose preferred disparities range between $-\pi/\omega$ and
$\pi/\omega$. The population response can be completely specified by three features $S$, $P$ and $\Delta\theta$ [4][5]:
$$E_d(\Delta\phi) = S + P\cos(\Delta\phi - \Delta\theta) \quad (4)$$
where
$$S = |V_l|^2 + |V_r|^2, \qquad P = 2|V_l||V_r| = 2|V_l V_r^*|, \qquad \Delta\theta = \theta_l - \theta_r = \arg(V_l V_r^*). \quad (5)$$
Figure 2b shows the graphical interpretation of these features. The feature $S$ is the average
response across the population. The feature $P$ is the difference between the peak and average
responses. Note that $S \geq P$, since $S - P = (|V_l| - |V_r|)^2 \geq 0$. The feature $\Delta\theta$ is the peak location
of the population response. Peak picking algorithms compute the estimates from the peak location,
i.e., $d_{est} = -\Delta\theta/\omega$ [6].
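Reusing `monocular_response` from the sketch above, the three population features and the peak-picking estimate of Eqs. (3)-(5) follow directly (the default parameter values match those used later in Section 3):

```python
import numpy as np

def population_features(u_left, u_right, omega=2 * np.pi / 16, sigma=6.78):
    """Features S, P, dtheta of the phase-tuned population (Eqs. (3)-(5)),
    plus the peak-picking disparity estimate d_est = -dtheta / omega."""
    vl = monocular_response(u_left, omega, sigma)
    vr = monocular_response(u_right, omega, sigma)
    S = abs(vl)**2 + abs(vr)**2          # average response of the population
    P = 2.0 * abs(vl * np.conj(vr))      # peak minus average response
    dtheta = np.angle(vl * np.conj(vr))  # peak location arg(Vl Vr*)
    return S, P, dtheta, -dtheta / omega
```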
3 Feature Analysis
In this section, we suggest a simple confidence measure that can be used to differentiate between
two classes of stimulus disparities: DIN and DOUT, corresponding to stimulus disparities inside
($|d| \leq \pi/\omega$) and outside ($|d| > \pi/\omega$) the range of preferred disparities in the population.
We find this confidence measure by analyzing the empirical joint densities of $S$ and the ratio
$R = P/S$ conditioned on the two disparity classes. Considering $S$ and $R$ is equivalent to considering $S$ and $P$; we ignore $\Delta\theta$. Intuitively, the peak location $\Delta\theta$ will be less effective in distinguishing between DIN and DOUT, since Figure 1 shows that the phase $\Delta\theta$ ranges between $-\pi$ and $\pi$
for both disparity classes. The ratio $R$ is bounded between 0 and 1, since $S \geq P$.

[Figure 3: plots omitted.]
Fig. 3: The empirical joint density of $S$ and $R$ given (a) DIN and (b) DOUT. Red indicates large
values; blue indicates small values. (c) The optimal decision boundaries derived from the Bayes
factor. (d) The change in total probability of error $\Delta P_e$ between using a flat boundary (thresholding
$R$) versus the optimal boundary.
Because of the uncertainties in the natural scenes, the features S and R are random variables. In
making a decision based on random features, Bayesian classifiers minimize the classification error.
Bayesian classifiers compare the conditional probabilities of the two disparity classes (DIN and
DOUT) given the observed feature values. The decision can be specified by thresholding the Bayes
factor:
$$B_{S,R} = \frac{f_{S,R \mid C}(s, r \mid \mathrm{DIN})}{f_{S,R \mid C}(s, r \mid \mathrm{DOUT})} \ \underset{\mathrm{DOUT}}{\overset{\mathrm{DIN}}{\gtrless}} \ T_{S,R} \quad (6)$$
where the threshold $T_{S,R}$ controls the location of the decision boundary in the feature space
$\{S, R\}$ and depends upon the prior class probabilities $P[\mathrm{DIN}]$ and $P[\mathrm{DOUT}]$. The function
$f_{S,R \mid C}(s, r \mid c)$ is the conditional density of the features given the class $c \in \{\mathrm{DIN}, \mathrm{DOUT}\}$.
To find the optimal decision boundary for the features $S$ and $R$, we estimated the joint class likelihood $f_{S,R \mid C}(s, r \mid c)$ from data obtained using the "Cones" and the "Teddy" stereograms from Middlebury College [8][9], shown in Figure 5a. The stereograms are rectified, so that the
correspondences are located in the same horizontal scan-lines. Each image has 1500 x 1800 pixels.
We constructed a population of phase-tuned neurons at each pixel. The disparity neurons had the
same spatial frequency and standard deviation, and were selective to vertical orientations. The spatial frequency was $\omega = 2\pi/16$ radians per pixel and the standard deviation in the horizontal
direction was $\sigma = 6.78$ pixels, corresponding to a spatial bandwidth of 1.8 octaves. The standard
deviation in the vertical direction was $2\sigma$. The range of the preferred disparities (DIN) of the population is between $\pm 8$ pixels. To reduce the variability in the classification, we also applied Gaussian spatial pooling with standard deviation $0.5\sigma$ to the population [4][5]. The features $S$ and $R$
computed from the population were separated into two classes (DIN and DOUT) according to the
ground truth in Figure 5b.
Figure 3a-b show the empirically estimated joint conditional densities for the two disparity classes.
They were computed by binning the features S and R with the bin sizes of 0.25 for S and 0.01 for
R . Given the disparity within the range of preferred disparities (DIN), the joint density concentrates at small S and large R . For the out-of-range disparities (DOUT), the joint density shifts to
both large S and small R . Intuitively, a horizontal hyperplane, illustrated by the red dotted line in
Figure 3a-b, is an appropriate decision boundary to separate the DIN and DOUT data. This indicates that the feature R can be an indicator to distinguish between the in-range and out-of-range
disparities. Mathematically, we can compute the optimal decision boundaries by applying different
thresholds to the Bayes factor in (6). Figure 3c shows the boundaries. They are basically flat except
at small S .
We also demonstrate the efficacy of thresholding R instead of using the optimal decision boundaries to distinguish between in-range and out-of-range disparities. Given the prior class probability
[Figure 4: block diagram omitted; phase-tuned populations $E_d(\Delta\phi)$ at position-shifts $\Delta c = -128, \ldots, 0, \ldots, 128$ process $U_l(x)$ and $U_r(x)$; a winner-take-all stage selects $R_{\Delta c^*}$ and $\Delta\theta_{\Delta c^*}$, producing the estimate $d_{est}$ and a DIN/DOUT decision $R > T_R$.]
Fig. 4: Proposed disparity estimator with the validation of disparity estimates.
$P[\mathrm{DIN}]$, we compute a threshold $c \in [0, 1]$ that minimizes the total probability of classification
error:
$$P_e = P[\mathrm{DIN}] \sum_{R < c} f_{S,R \mid C}(s, r \mid \mathrm{DIN}) + (1 - P[\mathrm{DIN}]) \sum_{R > c} f_{S,R \mid C}(s, r \mid \mathrm{DOUT}) \quad (7)$$
We then compare this total probability of error with the one computed using the optimal decision
boundaries derived in (6). Figure 3d shows the deviation in the total probability of error between
the two approaches versus $P[\mathrm{DIN}]$. The deviation is small (on the order of $10^{-2}$), suggesting that
thresholding $R$ results in similar performance as using the optimal decision boundaries. Thus, $R$
can be used as a confidence measure for distinguishing DIN and DOUT. Moreover, this measure
can be computed by normalization, which is a common component in models for V1 neurons [11].
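For intuition, the threshold selection of Eq. (7) can be carried out on binned data; a minimal sketch, where `hist_in` and `hist_out` are hypothetical 1D histograms of $R$ for each class (each normalized to sum to one) and `p_in` is $P[\mathrm{DIN}]$:

```python
import numpy as np

def error_probability(c, bin_centers, hist_in, hist_out, p_in):
    """Total classification error of Eq. (7) for a threshold c on R.
    DIN is declared when R >= c, so DIN mass below c and DOUT mass at or
    above c count as errors."""
    below = bin_centers < c
    return p_in * hist_in[below].sum() + (1.0 - p_in) * hist_out[~below].sum()

# Sweep candidate thresholds and keep the minimizer, e.g.:
# c_opt = min(np.linspace(0, 1, 101),
#             key=lambda c: error_probability(c, centers, h_in, h_out, 0.3))
```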
4 Hybrid position-phase model for disparity estimation with validation
Our analysis above shows that R is a simple indicator to distinguish between in-range and out-ofrange disparities. In this section, we describe a model that uses this feature to estimate the stimulus
disparity with validation.
Figure 4 shows the proposed model, which consists of populations of hybrid-tuned disparity neurons tuned to different phase-shifts $\Delta\phi$ and position-shifts $\Delta c$. For each population tuned to the
same position-shift but different phase-shifts (a phase-tuned population), we compute the ratio
$R_{\Delta c} = P_{\Delta c} / S_{\Delta c}$. The average activation $S_{\Delta c}$ can be computed by pooling the responses of the
entire phase-tuned population. The feature $P_{\Delta c}$ can be computed by subtracting the average activation $S_{\Delta c}$ from the peak response $S_{\Delta c} + P_{\Delta c}$ of the phase-tuned population. The features $R_{\Delta c}$ at different position-shifts are compared through a winner-take-all network to select the position-shift
$\Delta c^*$ with the maximum $R_{\Delta c}$. The disparity estimate is further refined by the peak location $\Delta\theta_{\Delta c^*}$:
$$d_{est} = \Delta c^* - \frac{\Delta\theta_{\Delta c^*}}{\omega} \quad (8)$$
In addition to estimating the stimulus disparity, we also validate the estimates by comparing $R_{\Delta c^*}$
with a threshold $T_R$. Instead of choosing a fixed threshold, we vary the threshold to show that the
feature $R_{\Delta c}$ can be an occlusion detector.
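A sketch of the winner-take-all estimator of Fig. 4 and Eq. (8), reusing `population_features` from Section 2; `extract_patch` (returning the right-image window re-centered by the candidate position shift) is a hypothetical helper we introduce for illustration:

```python
def hybrid_estimate(left_patch, image_right, x0, omega, sigma,
                    shifts=range(-128, 129), T_R=0.3):
    """Winner-take-all over position-shifted phase-tuned populations.
    Returns (d_est, is_valid): the estimate of Eq. (8) and a DIN/DOUT flag."""
    best_R, best_d = -1.0, 0.0
    for dc in shifts:
        right_patch = extract_patch(image_right, x0 + dc)  # hypothetical helper
        S, P, dtheta, _ = population_features(left_patch, right_patch,
                                              omega, sigma)
        R = P / S
        if R > best_R:                          # winner-take-all on R
            best_R, best_d = R, dc - dtheta / omega
    return best_d, best_R > T_R                 # DIN only if R exceeds T_R
```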
4.1 Disparity estimation with confidence
We applied the proposed model to estimate the disparity of the "Cones" and the "Teddy" stereograms, shown in Figure 5a. The spatial frequency and the spatial standard deviation of the neurons
[Figure 5: images omitted; panels show (a) the left and right images of the "Teddy" and "Cones" stereograms, (b) ground truth disparity maps, (c) ground truth occlusion maps, and (d)-(e) estimate and error maps on a disparity scale from -100 to 100.]
Fig. 5: (a) The two natural stereograms used to evaluate the model performance. (b) The ground
truth disparity maps with respect to the left images, obtained by the structured light method. (c) The
ground truth occlusion maps. (d) The disparity maps and the error maps computed by the coarse-to-fine approach. (e) The disparity maps and the error maps computed by the proposed model. The
detected invalid estimates are labelled in black in the disparity maps.
were kept the same as the previous analysis. We also performed spatial pooling and orientation
pooling to improve the estimation. For spatial pooling, we applied a circularly symmetric Gaussian
function with standard deviation $\sigma$. For orientation pooling, we pooled the responses over five orientations ranging from 30 to 150 degrees. The range of the position-shifts for the populations was
set to the largest disparity range, $\pm 128$ pixels, according to the ground truth.
We also implemented the coarse-to-fine model as described in [4] for comparison. In this model, an
initial disparity estimate computed from a population of phase-tuned neurons at the coarsest scale is
successively refined by the populations of phase-tuned neurons at the finer scales. By choosing the
coarsest scale large enough, the disparity range covered by this method can be arbitrarily large. The
coarsest and the finest scales had the Gabor periods of 512 and 16 pixels. The Gabor periods of the
successive scales differed by a factor of 2. Neurons at the finest scale had the same RF parameters as our model. The same spatial pooling and orientation pooling were applied at each scale.
Figure 5d-e show the estimated disparity maps and the error maps of the two approaches. The error
maps show the regions where the disparity estimates exceed 1 pixel of error in the disparity. Both
models correctly recover the stimulus disparity at most locations with gradual disparity changes,
but tend to make errors at the depth boundaries. However, the proposed model generates more
accurate estimates. In the coarse-to-fine model, the percentage of pixels being incorrectly estimated
is 36.3%, while our proposed model is only 27.8%.
The coarse-to-fine model tends to make errors around the depth boundaries. This arises because the
assumption that the stimulus disparity is constant over the RF of the neuron is unlikely at very large
scales. At boundaries, the coarse-to-fine model generates poor initial estimates, which cannot be
corrected at the finer scales, because the actual stimulus disparities are outside the range considered
at the finer scales.
On the other hand, the proposed model can not only estimate the stimulus disparity, but also can
validate the estimates. In general, the responses of neurons selective to different position disparities
are not comparable, since they depend upon image contrast which varies at different spatial locations. However, the feature R , which is computed by normalizing the response peak by the average
response, eliminates such dependency. Moreover, the invalid regions detected (the black regions on
the disparity maps) are in excellent agreement with the error labels.
4.2 Occlusion detection
In addition to validating the disparity estimates, the feature R can also be used to detect occlusion.
Occlusion is one of the challenging problems in stereo vision. Occlusion occurs near the depth discontinuities where there is no correspondence between the left and right images. The disparity in
the occlusion regions is undefined. The occlusion regions for these stereograms are shown in
Figure 5c.
There are three possibilities for image pixels that are labelled as out of range (DOUT). They are
occluded pixels, pixels with valid disparities that are incorrectly estimated, and pixels with valid
disparity that are correctly estimated. Figure 6a shows the percentages of DOUT pixels that fall
into each possibility as the threshold $T_R$ applied to $R$ varies, e.g.,
$$P_1(\text{occluded}) = \frac{\#\ \text{of occluded pixels in DOUT}}{\text{total}\ \#\ \text{of pixels in DOUT}} \times 100\% \quad (9)$$
These percentages sum to unity for any threshold $T_R$. For small thresholds, the detector mainly
identifies the occlusion regions. As the threshold increases, the detector also begins to detect incorrect disparity estimates. Figure 6b shows the percentages of pixels in each possibility that are classified as DOUT as a function of $T_R$, e.g.,
$$P_2(\text{occluded}) = \frac{\#\ \text{of occluded pixels in DOUT}}{\#\ \text{of occluded pixels in image}} \times 100\% \quad (10)$$
For a large threshold ($T_R$ close to unity), all estimates are labelled as DOUT, so the three percentages approach 100%. The proposed detector is effective in identifying occlusion. At the threshold
$T_R = 0.3$, it identifies ~70% of the occluded pixels and ~20% of the pixels with incorrect estimates,
with only ~10% misclassification.
[Figure 6: plots omitted; both panels show $P_1$ and $P_2$ (x100%) versus $T_R$ on $[0, 1]$.]
Fig. 6: The percentages of occluded pixels (thick), pixels with incorrect disparity estimates (thin)
and pixels with correct estimates (dotted) identified as DOUT. (a) Percentages as a fraction of the total
number of DOUT pixels. (b) Percentages as a fraction of the number of pixels of each type.
5 Discussion
In this paper, we have proposed an algorithm to estimate stimulus disparities based on a confidence
measure computed from populations of hybrid-tuned disparity neurons. Although there have been
previously proposed models that estimate the stimulus disparity from populations of hybrid tuned
neurons [4][10], our model is the first that also provides a confidence measure for these estimates.
Our analysis suggests that pixels with low confidence are likely to be in occluded regions. The
detection of occlusion, an important problem in stereo vision, was not addressed in these previous
approaches.
The confidence measure used in the proposed algorithm can be computed using normalization,
which has been used to model the responses of V1 neurons [11]. Previous work has emphasized the
role of normalization in reducing the effect of image contrast or in ensuring that the neural
responses tuned to different stimulus dimensions are comparable [12]. Our results show that, in
addition to these roles, normalization also serves to make the magnitude of the neural responses
more representative of the confidence in validating the hypothesis that the input disparity is close to
the neurons preferred disparity. The classification performance using this normalized feature is
close to that using the statistical optimal boundaries.
Aggregating the neural responses over locations, orientations and scales is a common technique to
improve the estimation performance. For the consistency with the coarse-to-fine approach, our
algorithm also applies spatial and orientation pooling before computing the confidence. An interesting question, which we are now investigating, is whether individual confidence measures computed from different locations or orientations can be combined systematically.
Acknowledgements
This work was supported in part by the Hong Kong Research Grants Council under Grant 619205.
References
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11]
[12]
H. B. Barlow, C. Blakemore, and J. D. Pettigrew. The neural mechanism of binocular depth discrimination. Journal of Neurophysiology, vol. 193(2), 327-342, 1967.
G. F. Poggio, B. C. Motter, S. Squatrito, and Y. Trotter. Responses of neurons in visual cortex (V1 and
V2) of the alert macaque to dynamic random-dot stereograms. Vision Research, vol. 25, 397-406,
1985.
I. Ohzawa, G. C. Deangelis, and R. D. Freeman. Stereoscopic depth discrimination in the visual cortex:
neurons ideally suited as disparity detectors. Science, vol. 249, 1037-1041, 1990.
Y. Chen and N. Qian. A Coarse-to-Fine Disparity Energy Model with Both Phase-Shift and PositionShift Receptive Field Mechanisms. Neural Computation, vol. 16, 1545-1577, 2004.
D. J. Fleet, H. Wagner and D. J. Heeger. Neural encoding of binocular disparity: energy models, position shifts and phase shifts. Vision Research, 1996, vol. 36, 1839-1857.
N. Qian, and Y. Zhu. Physiological computation of binocular disparity. Vision Research, vol. 37, 18111827, 1997.
S. J. D. Prince, B. G. Cumming, and A. J. Parker. Range and Mechanism of Encoding of Horizontal
Disparity in Macaque V1. Journal of Neurophysiology, vol. 87, 209-221, 2002.
D. Scharstein and R. Szeliski. A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms. International Journal of Computer Vision, vol. 47(1/2/3), 7-42, 2002.
D. Scharstein and R. Szeliski. High-accuracy stereo depth maps using structured light. IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, 195-202, 2003.
J. C. A. Read and B. G. Cumming. Sensors for impossible stimuli may solve the stereo correspondence
problem. Nature Neuroscience, vol. 10, 1322-1328, 2007.
D. J. Heeger. Normalization of cell responses in cat striate cortex. Visual Neuroscience, vol. 9, 181198, 1992.
S. R. Lehky and T. J. Sejnowski. Neural model of stereoacuity and depth interpolation based on a distributed representation of stereo disparity. Journal of Neuroscience, vol. 10, 2281-2299, 1990.
| 3334 |@word neurophysiology:2 kong:5 trotter:1 gradual:1 tr:3 initial:2 disparity:113 efficacy:2 tuned:33 outperforms:1 imaginary:2 comparing:1 activation:2 dx:2 ust:2 finest:2 refines:1 discrimination:2 cue:1 half:2 stereoacuity:1 coarse:12 provides:1 location:18 successive:1 five:1 alert:1 constructed:1 incorrect:3 consists:1 inside:3 p1:2 brain:1 freeman:1 actual:1 considering:3 begin:1 estimating:1 notation:1 bounded:1 moreover:2 minimizes:1 classifier:2 control:2 grant:2 appear:1 positive:1 before:1 aggregating:1 tends:1 middlebury:1 encoding:2 analyzing:1 modulation:1 abuse:1 interpolation:1 black:2 suggests:1 challenging:1 co:2 blakemore:1 limited:2 range:29 camera:1 displacement:1 empirical:3 gabor:5 confidence:17 integrating:1 dout:23 suggest:1 cannot:2 close:3 applying:1 impossible:1 equivalent:2 map:12 shi:1 center:1 resolution:3 simplicity:1 identifying:1 qian:4 estimator:2 population:43 sar:2 us:3 distinguishing:1 hypothesis:1 agreement:1 recognition:1 located:1 mammalian:1 binning:1 observed:1 role:2 tsang:1 region:7 highest:1 stereograms:6 ideally:1 occluded:9 dynamic:1 engr:2 motivate:1 depend:1 upon:3 eric:1 completely:1 compactly:1 joint:6 x100:2 cat:1 univ:2 separated:1 describe:2 effective:2 sejnowski:1 deangelis:1 detected:2 outside:4 refined:2 choosing:2 pref:2 whose:1 larger:1 plausible:2 valued:1 solve:1 statistic:1 superscript:1 differentiate:1 propose:1 subtracting:1 validate:2 object:2 p2:2 implemented:1 indicate:1 differ:1 direction:2 concentrate:1 thick:1 correct:1 filter:1 centered:1 viewing:1 bin:1 explains:1 fix:1 im:2 mathematically:1 around:2 considered:1 ground:4 vary:2 estimation:7 label:1 council:1 largest:2 weighted:1 kowloon:2 gaussian:3 sensor:1 varying:1 derived:2 modelling:1 indicates:6 likelihood:1 mainly:1 hk:2 tech:2 contrast:2 detect:4 squaring:2 vl:2 entire:1 unlikely:1 selective:2 pixel:29 arg:1 among:1 orientation:9 classification:4 spatial:14 field:6 identical:1 represents:1 thin:1 stimulus:28 individual:2 phase:36 occlusion:13 detection:2 possibility:3 evaluation:1 light:2 undefined:1 accurate:1 edge:1 poggio:1 unless:1 re:2 prince:1 cover:2 deviation:10 examining:1 dependency:1 varies:2 combined:1 density:6 peak:18 international:1 picking:2 squared:1 pettigrew:1 successively:1 suggesting:1 sinusoidal:1 retinal:2 pooled:1 depends:1 performed:1 red:2 bayes:3 recover:1 minimize:1 accuracy:1 modelled:1 bayesian:2 basically:1 rectified:1 finer:5 classified:1 detector:5 energy:8 frequency:6 radian:1 appears:2 response:32 binocular:13 hand:1 horizontal:5 modulus:1 effect:1 ohzawa:1 normalized:1 barlow:1 din:20 entering:1 symmetric:1 read:1 illustrated:1 sin:1 hong:5 octave:1 presenting:1 demonstrate:2 image:19 ranging:1 common:3 empirically:1 winner:2 slight:1 interpretation:1 tuning:1 consistency:1 nonlinearity:1 jg:1 had:3 dot:1 cortex:4 longer:1 arbitrarily:1 additional:1 period:2 dashed:1 stereogram:1 characterized:1 cross:1 dept:2 ensuring:1 bertram:1 vision:7 represent:1 normalization:5 confined:1 cell:8 addition:2 fine:9 addressed:1 source:1 biased:1 eliminates:1 pooling:9 tend:1 validating:2 ee:2 near:1 exceed:1 enough:1 bandwidth:2 identified:1 reduce:1 shift:17 fleet:1 whether:2 ul:1 stereo:6 covered:3 ten:1 lehky:1 percentage:8 dotted:2 stereoscopic:1 estimated:6 neuroscience:3 per:1 correctly:2 blue:1 vol:12 express:1 motter:1 four:1 threshold:12 kept:1 v1:4 fraction:2 cone:4 sum:3 uncertainty:1 electronic:2 decision:10 comparable:2 followed:1 conjugation:1 distinguish:3 correspondence:4 encountered:1 occur:1 tofine:1 scene:2 
flat:2 generates:3 coarsest:3 structured:2 according:2 poor:1 describes:1 across:1 ur:1 unity:2 biologically:2 making:1 intuitively:2 monocular:7 previously:2 mechanism:4 serf:1 v2:1 appropriate:1 alternative:1 denotes:1 graphical:1 question:1 occurs:1 receptive:6 strategy:2 striate:1 wrap:1 distance:1 separate:1 sci:2 ratio:3 difficult:1 unfortunately:1 taxonomy:1 negative:2 vertical:5 neuron:46 teddy:3 incorrectly:2 variability:1 frame:1 intensity:2 wraparound:1 pair:3 required:1 specified:2 discontinuity:1 macaque:2 address:1 suggested:1 pattern:1 rf:10 reliable:2 shifting:1 misclassification:1 natural:5 hybrid:4 indicator:2 zhu:1 improve:2 eye:3 inversely:1 identifies:2 axis:2 prior:2 acknowledgement:1 interesting:1 proportional:1 versus:2 validation:3 degree:1 thresholding:4 systematically:1 supported:1 keeping:1 allow:1 szeliski:2 fall:2 wagner:1 distributed:1 boundary:15 depth:10 dimension:1 valid:2 scharstein:2 ignore:1 preferred:16 investigating:1 summing:1 nature:1 excellent:1 complex:6 dense:1 linearly:1 arise:1 profile:2 quadrature:1 fig:6 representative:1 differed:1 parker:1 vr:3 cumming:2 position:12 heeger:2 lie:1 emphasized:1 physiological:1 normalizing:1 circularly:1 false:1 effectively:1 magnitude:1 illustrates:1 conditioned:1 chen:3 suited:1 likely:1 visual:4 applies:1 truth:4 determines:1 conditional:3 invalid:2 labelled:3 change:3 determined:1 except:1 corrected:1 reducing:1 hyperplane:1 total:6 est:3 select:2 formally:1 college:1 scan:1 arises:1 evaluate:1 eebert:1 |
2,575 | 3,335 | Testing for Homogeneity
with Kernel Fisher Discriminant Analysis
Zaïd Harchaoui
LTCI, TELECOM ParisTech and CNRS
46, rue Barrault, 75634 Paris cedex 13, France
zaid.harchaoui@enst.fr
Francis Bach
Willow Project, INRIA-ENS
45, rue d'Ulm, 75230 Paris, France
francis.bach@mines.org
Éric Moulines
LTCI, TELECOM ParisTech and CNRS
46, rue Barrault, 75634 Paris cedex 13, France
eric.moulines@enst.fr
Abstract
We propose to investigate test statistics for testing homogeneity based on kernel
Fisher discriminant analysis. Asymptotic null distributions under the null hypothesis
are derived, and consistency against fixed alternatives is assessed. Finally, experimental evidence of the performance of the proposed approach on both artificial
and real datasets is provided.
1 Introduction
An important problem in statistics and machine learning consists in testing whether the distributions
of two random variables are identical under the alternative that they may differ in some ways. More
precisely, let $\{X_1^{(1)}, \ldots, X_{n_1}^{(1)}\}$ and $\{X_1^{(2)}, \ldots, X_{n_2}^{(2)}\}$ be independent random variables taking values in the input space $(\mathcal{X}, d)$, with common distributions $P_1$ and $P_2$, respectively. The problem consists in testing the null hypothesis $H_0 : P_1 = P_2$ against the alternative $H_A : P_1 \neq P_2$. This problem
arises in many applications, ranging from computational anatomy [10] to process monitoring [7]. We
shall allow the input space X to be quite general, including for example finite-dimensional Euclidean
spaces or more sophisticated structures such as strings or graphs (see [17]) arising in applications
such as bioinformatics [4].
Traditional approaches to this problem are based on distribution functions and use a certain distance
between the empirical distributions obtained from the two samples. The most popular procedures
are the two-sample Kolmogorov-Smirnov tests or the Cramer-Von Mises tests, that have been the
standard for addressing these issues (at least when the dimension of the input space is small, and
most often when X = R). Although these tests are popular due to their simplicity, they are known
to be insensitive to certain characteristics of the distribution, such as densities containing high-frequency components or local features such as bumps. The low power of the traditional density-based statistics can be improved on using test statistics based on kernel density estimators [2] and
[1] and wavelet estimators [6]. Recent work [11] has shown that one could use the difference in means in
RKHSs in order to consistently test for homogeneity. In this paper, we show that taking into account
the covariance structure in the RKHS allows one to obtain simple limiting distributions.
The paper is organized as follows: in Section 2 and Section 3, we state the main definitions and we
construct the test statistics. In Section 4, we give the asymptotic distribution of our test statistic under
the null hypothesis, and investigate the consistency and the power of the test for fixed alternatives. In
Section 5 we provide experimental evidence of the performance of our test statistic on both artificial
and real datasets. Detailed proofs are presented in the last sections.
2 Mean and covariance in reproducing kernel Hilbert spaces
We first highlight the main assumptions we make in the paper on the reproducing kernel, then introduce operator-theoretic tools for working with distributions in infinite-dimensional spaces.
2.1 Reproducing kernel Hilbert spaces
Let $(\mathcal{X}, d)$ be a separable metric space, and denote by $\mathscr{X}$ the associated $\sigma$-algebra. Let $X$ be an $\mathcal{X}$-valued random variable, with probability measure $\mathbb{P}$; the corresponding expectation is denoted $\mathbb{E}$.
Consider a Hilbert space $(\mathcal{H}, \langle \cdot, \cdot \rangle_{\mathcal{H}})$ of functions from $\mathcal{X}$ to $\mathbb{R}$. The Hilbert space $\mathcal{H}$ is an RKHS if
at each $x \in \mathcal{X}$, the point evaluation operator $\delta_x : \mathcal{H} \rightarrow \mathbb{R}$, which maps $f \in \mathcal{H}$ to $f(x) \in \mathbb{R}$, is a
bounded linear functional. To each point $x \in \mathcal{X}$, there corresponds an element $\phi(x) \in \mathcal{H}$ (we call $\phi$
the feature map) such that $\langle \phi(x), f \rangle_{\mathcal{H}} = f(x)$ for all $f \in \mathcal{H}$, and $\langle \phi(x), \phi(y) \rangle_{\mathcal{H}} = k(x, y)$, where
$k : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ is a positive definite kernel. We denote by $\|f\|_{\mathcal{H}} = \langle f, f \rangle_{\mathcal{H}}^{1/2}$ the associated norm.
It is assumed in the remainder that $\mathcal{H}$ is a separable Hilbert space. Note that this is always the case
if $\mathcal{X}$ is a separable metric space and if the kernel is continuous (see [18]). Throughout this paper, we
make the following two assumptions on the kernel:
(A1) The kernel $k$ is bounded, that is $|k|_\infty = \sup_{(x,y) \in \mathcal{X} \times \mathcal{X}} k(x, y) < \infty$.
(A2) For all probability measures $\mathbb{P}$ on $(\mathcal{X}, \mathscr{X})$, the RKHS associated with $k(\cdot, \cdot)$ is dense in $L^2(\mathbb{P})$.
The asymptotic normality of our test statistic is valid without assumption (A2), while the consistency
results against fixed alternatives do need (A2). Assumption (A2) is true for translation-invariant
kernels [8], and in particular for the Gaussian kernel on ℝ^d [18]. Note that we do not require the
compactness of X as in [18].
2.2 Mean element and covariance operator
We shall need some operator-theoretic tools to define mean elements and covariance operators in
RKHS. A linear operator T is said to be bounded if there is a number C such that ‖Tf‖_H ≤ C‖f‖_H
for all f ∈ H. The operator norm of T is then defined as the infimum of such numbers C, that is
‖T‖ = sup_{‖f‖_H ≤ 1} ‖Tf‖_H (see [9]).
We recall below some basic facts about first- and second-order moments of RKHS-valued random
variables. If ∫ k^{1/2}(x, x) P(dx) < ∞, the mean element μ_P is defined for all functions f ∈ H as the
unique element in H satisfying

    ⟨μ_P, f⟩_H = Pf := ∫ f dP .    (1)

If furthermore ∫ k(x, x) P(dx) < ∞, then the covariance operator Σ_P is defined as the unique linear
operator on H satisfying, for all f, g ∈ H,

    ⟨f, Σ_P g⟩_H = ∫ (f − Pf)(g − Pg) dP .    (2)

Note that when assumption (A2) is satisfied, the map P ↦ μ_P is injective. The operator
Σ_P is a self-adjoint nonnegative trace-class operator. In the sequel, the dependence of μ_P and Σ_P on
P is omitted whenever there is no risk of confusion.
Given a sample {X_1, ..., X_n}, the empirical estimates of the mean element and the
covariance operator, respectively, are defined using empirical moments and lead to:

    μ̂ = n^{−1} Σ_{i=1}^n k(X_i, ·) ,    Σ̂ = n^{−1} Σ_{i=1}^n k(X_i, ·) ⊗ k(X_i, ·) − μ̂ ⊗ μ̂ .    (3)
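As an aside for implementers, the quantities in Eq. (3) never need to be formed explicitly: tested against a function f = k(x, ·), they reduce to averages of Gram-matrix entries. The sketch below is our illustration (not from the paper), assuming a Gaussian RBF kernel so that (A1) holds.

```python
import numpy as np

def rbf_gram(A, B, sigma=1.0):
    # Gram matrix k(a_i, b_j) = exp(-|a_i - b_j|^2 / (2 sigma^2)); bounded, so (A1) holds.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))        # sample X_1, ..., X_n
x = np.zeros((1, 2))                 # test point; f = k(x, .)

kx = rbf_gram(X, x).ravel()          # k(X_i, x), i = 1..n
mean_part = kx.mean()                # <mu_hat, f>_H = n^{-1} sum_i f(X_i), cf. Eq. (1)/(3)
cov_part = (kx**2).mean() - kx.mean()**2   # <f, Sigma_hat f>_H, cf. Eq. (2)/(3)
print(mean_part, cov_part)
```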
The operator Σ is a self-adjoint nonnegative trace-class operator. Hence, it can be diagonalized in
an orthonormal basis, with a spectrum composed of a strictly decreasing sequence λ_p > 0 tending
to zero and potentially a null space N(Σ) composed of functions f in H such that ∫ (f − Pf)² dP =
0 [5], i.e., functions which are constant on the support of P.
The null space may be reduced to the null element (in particular for the Gaussian kernel), or may
be infinite-dimensional. Similarly, there may be infinitely many strictly positive eigenvalues (true
nonparametric case) or finitely many (underlying finite-dimensional problems).
3 KFDA-based test statistic
In the feature space, the two-sample homogeneity test procedure can be formulated as follows. Given
{X_1^(1), ..., X_{n_1}^(1)} and {X_1^(2), ..., X_{n_2}^(2)}, two independent identically distributed samples from P_1 and P_2 respectively, having mean elements and covariance operators
(μ_1, Σ_1) and (μ_2, Σ_2), we wish to test the null hypothesis H_0 : μ_1 = μ_2 and Σ_1 = Σ_2
against the alternative hypothesis H_A : μ_1 ≠ μ_2.
In this paper, we tackle the problem by using a (regularized) kernelized version of Fisher discriminant analysis. Denote by Σ_W := (n_1/n)Σ_1 + (n_2/n)Σ_2 the pooled covariance operator, where
n := n_1 + n_2, corresponding to the within-class covariance matrix in the finite-dimensional setting
(see [14]). Denote by Σ_B := (n_1 n_2/n²)(μ_2 − μ_1) ⊗ (μ_2 − μ_1) the between-class covariance operator. For a = 1, 2, denote by (μ̂_a, Σ̂_a) the empirical estimates of the mean element and
the covariance operator, defined as previously stated in (3). Denote Σ̂_W := (n_1/n)Σ̂_1 + (n_2/n)Σ̂_2
the empirical pooled covariance estimator, and Σ̂_B := (n_1 n_2/n²)(μ̂_2 − μ̂_1) ⊗ (μ̂_2 − μ̂_1) the empirical between-class covariance operator. Let {γ_n}_{n≥0} be a sequence of strictly positive numbers.
The maximum Fisher discriminant ratio serves as the basis of our test statistic:

    n max_{f∈H} ⟨f, Σ̂_B f⟩_H / ⟨f, (Σ̂_W + γ_n I) f⟩_H = (n_1 n_2 / n) ‖(Σ̂_W + γ_n I)^{−1/2} δ̂‖²_H ,    (4)

where I denotes the identity operator and δ̂ := μ̂_2 − μ̂_1. Note that if the input space is Euclidean, e.g. X = ℝ^d, the
kernel is linear k(x, y) = xᵀy and γ_n = 0, this quantity matches the so-called Hotelling T² statistic in the two-sample case [15]. Moreover, in practice it may be computed thanks to the kernel
trick, adapted to kernel Fisher discriminant analysis and outlined in [17, Chapter 6]. We shall
make the following assumptions on Σ_1 and Σ_2, respectively:

(B1) For u = 1, 2, the eigenvalues {λ_p(Σ_u)}_{p≥1} satisfy Σ_{p=1}^∞ λ_p^{1/2}(Σ_u) < ∞.

(B2) For u = 1, 2, there are infinitely many strictly positive eigenvalues {λ_p(Σ_u)}_{p≥1} of Σ_u.
The statistical analysis conducted in Section 4 shall demonstrate, as γ_n → 0 at an appropriate
rate, the need to recenter and rescale (a standard statistical transformation known as
studentization) the maximum Fisher discriminant ratio, in order to get a theoretically well-calibrated
test statistic. These roles, recentering and rescaling, will be played respectively by d_1(Σ_W, γ) and
d_2(Σ_W, γ), where for a given compact operator Σ with decreasing eigenvalues λ_p(Σ), the quantity
d_r(Σ, γ) is defined for all r ≥ 1 as

    d_r(Σ, γ) := { Σ_{p=1}^∞ (λ_p + γ)^{−r} λ_p^r }^{1/r} .    (5)
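For illustration, once a (truncated) eigenvalue sequence is available, d_1 and d_2 of Eq. (5) are one-liners; this small sketch is ours, with an arbitrary polynomially decaying spectrum.

```python
import numpy as np

def d_r(lam, gamma, r):
    # d_r(Sigma, gamma) = ( sum_p ((lambda_p + gamma)^{-1} lambda_p)^r )^{1/r}, Eq. (5),
    # truncated to the finitely many eigenvalues passed in.
    lam = np.asarray(lam, dtype=float)
    return np.sum((lam / (lam + gamma)) ** r) ** (1.0 / r)

lam = (2 * np.pi * np.arange(1, 500)) ** (-4)   # spline-kernel-like decay (m = 2)
print(d_r(lam, 1e-3, 1), d_r(lam, 1e-3, 2))
```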
4 Theoretical results
We consider in the sequel the following studentized test statistic:

    T̂_n(γ_n) := [ (n_1 n_2 / n) ‖(Σ̂_W + γ_n I)^{−1/2} δ̂‖²_H − d_1(Σ̂_W, γ_n) ] / [ √2 d_2(Σ̂_W, γ_n) ] .    (6)
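To make the construction concrete, here is a minimal sketch of T̂_n for the special case of an explicit finite-dimensional (linear) feature map, where all operators are plain matrices; in a genuine RKHS the same quantities would be computed from centered Gram matrices via the kernel trick of [17, Chapter 6]. The code is our illustration, not the authors' implementation, and the sample sizes are arbitrary.

```python
import numpy as np
from scipy.stats import norm

def kfda_statistic(X1, X2, gamma):
    # Studentized statistic of Eq. (6) with a linear feature map (Hotelling-like case).
    n1, n2 = len(X1), len(X2)
    n = n1 + n2
    delta = X2.mean(axis=0) - X1.mean(axis=0)        # delta_hat = mu2_hat - mu1_hat
    S1 = np.cov(X1, rowvar=False, bias=True)
    S2 = np.cov(X2, rowvar=False, bias=True)
    SW = (n1 / n) * S1 + (n2 / n) * S2               # pooled covariance Sigma_W_hat
    lam, U = np.linalg.eigh(SW)                      # spectrum of Sigma_W_hat
    proj = U.T @ delta
    core = (n1 * n2 / n) * np.sum(proj**2 / (lam + gamma))
    d1 = np.sum(lam / (lam + gamma))                 # d_1(SW, gamma), Eq. (5)
    d2 = np.sqrt(np.sum((lam / (lam + gamma))**2))   # d_2(SW, gamma)
    return (core - d1) / (np.sqrt(2.0) * d2)

rng = np.random.default_rng(1)
X1 = rng.normal(size=(500, 5))
X2 = rng.normal(size=(500, 5))
T = kfda_statistic(X1, X2, gamma=1e-2)
print(T, T > norm.ppf(0.95))  # reject H0 at level alpha = 0.05 if True
```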
In this paper, we first consider the asymptotic behavior of T̂_n under the null hypothesis, and then
against a fixed alternative. This will establish that our nonparametric test procedure is consistent in
power.
4.1 Asymptotic normality under the null hypothesis
In this section, we derive the distribution of the test statistic under the null hypothesis H_0 : P_1 = P_2
of homogeneity, i.e. μ_1 = μ_2 and Σ_1 = Σ_2 = Σ.

Theorem 1. Assume (A1) and (B1). If P_1 = P_2 = P and if γ_n + γ_n^{−1} n^{−1/2} → 0, then

    T̂_n(γ_n) →ᴰ N(0, 1) .    (7)

The proof is postponed to Section 7. Under the assumptions of Theorem 1, the sequence of tests that
rejects the null hypothesis when T̂_n(γ_n) ≥ z_{1−α}, where z_{1−α} is the (1−α)-quantile of the standard
normal distribution, is asymptotically of level α. Note that the limiting distribution depends neither on
the kernel nor on the regularization parameter.
4.2 Power consistency
We study the power of the test based on T̂_n(γ_n) under alternative hypotheses. The minimal requirement is to prove that this sequence of tests is consistent in power. A sequence of tests of
constant level α is said to be consistent in power if the probability of accepting the null hypothesis
of homogeneity goes to zero as the sample size goes to infinity under a fixed alternative.
The following proposition shows that this limit is finite, strictly positive, and independent of the kernel
whenever P_1 ≠ P_2 (see [8] for similar results for canonical correlation analysis). It gives some useful
insight on ‖Σ_W^{−1/2} δ‖_H, with δ := μ_2 − μ_1, i.e. the population counterpart of
‖(Σ̂_W + γ_n I)^{−1/2} δ̂‖_H, on which our test statistic is based.
Proposition 2. Assume (A1) and (A2). If γ_n + γ_n^{−1} n^{−1/2} → 0, then for any probability distributions
P_1 and P_2, with ρ_1 := n_1/n and ρ_2 := n_2/n,

    ‖Σ_W^{−1/2} δ‖²_H = (ρ_1 ρ_2)^{−1} ( ∫ [p_1 p_2 / (ρ_1 p_1 + ρ_2 p_2)] dν )^{−1} ( 1 − ∫ [p_1 p_2 / (ρ_1 p_1 + ρ_2 p_2)] dν ) ,

where ν is any probability measure such that P_1 and P_2 are absolutely continuous w.r.t. ν, and p_1
and p_2 are the densities of P_1 and P_2 with respect to ν.

The norm ‖Σ_W^{−1/2} δ‖²_H is finite when the χ²-divergence ∫ p_1^{−1}(p_2 − p_1)² dν is finite. It is equal to
zero if and only if the χ²-divergence is null, that is, if and only if P_1 = P_2.
By combining the two previous propositions, we therefore obtain the following consistency theorem.

Theorem 3. Assume (A1) and (A2). Let P_1 and P_2 be two distributions over (X, 𝒳) such that
P_2 ≠ P_1. If γ_n + γ_n^{−1} n^{−1/2} → 0, then

    P_{H_A}( T̂_n(γ_n) > z_{1−α} ) → 1 .    (8)

5 Experiments
In this section, we investigate the experimental performance of our KFDA test statistic, and compare it in terms of power against other nonparametric test statistics.
5.1 Artificial data
We shall focus here on a particularly simple setting, in order to analyze the major issues arising in
applying our approach in practice. Indeed, we consider the periodic smoothing spline kernel (see
[19] for a detailed derivation), for which explicit formulae are available for the eigenvalues of the
corresponding covariance operator when the underlying distribution is uniform. This allows us to
alleviate the issue of estimating the spectrum of the covariance operator, and to weigh up the practical
impact of the regularization on the power of our test statistic.

Periodic smoothing spline kernel. Consider X as the circle identified with the
interval [0, 1] (with periodicity conditions). We consider the strictly positive sequence K_ν =
(2πν)^{−2m} and the following norm:

    ‖f‖²_H = ⟨f, c_0⟩²/K_0 + Σ_{ν>0} ( ⟨f, c_ν⟩² + ⟨f, s_ν⟩² ) / K_ν ,

where c_ν(t) = √2 cos(2πνt) and s_ν(t) = √2 sin(2πνt) for ν ≥ 1, and c_0(t) = 1_X. This is always an
RKHS norm, associated with the following kernel:

    K(s, t) = [ (−1)^{m−1} / (2m)! ] B_{2m}( (s − t) − ⌊s − t⌋ ) ,

where B_{2m} is the 2m-th Bernoulli polynomial. We have B_2(x) = x² − x + 1/6.
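For m = 2 (the value used in the experiments below), the kernel can be written out directly, since B_4(x) = x⁴ − 2x³ + x² − 1/30; the following check is our sketch, not code from the paper.

```python
import numpy as np

def spline_kernel_m2(s, t):
    # K(s, t) = (-1)^(m-1)/(2m)! * B_{2m}((s - t) - floor(s - t)) with m = 2,
    # i.e. K(s, t) = -B_4({s - t}) / 24, where B_4(x) = x^4 - 2x^3 + x^2 - 1/30.
    x = (s - t) % 1.0
    b4 = x**4 - 2 * x**3 + x**2 - 1.0 / 30.0
    return -b4 / 24.0

ts = np.linspace(0.0, 1.0, 64, endpoint=False)
K = spline_kernel_m2(ts[:, None], ts[None, :])
print(np.linalg.eigvalsh(K).min())  # >= 0 up to round-off: a valid positive definite kernel
```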
We consider the following testing problem:

    H_0 : p_1 = p_2    versus    H_A : p_2 ≠ p_1 ,

with p_1 the uniform density (i.e., its density with respect to the Lebesgue measure equals c_0),
and p_2 = p_1 · (c_0 + 0.25 c_4) = c_0 + 0.25 c_4. The covariance operator Σ(p_1) has eigenvectors c_0, c_ν, s_ν, with
eigenvalue 0 for c_0 and K_ν for the others.
Comparison with MMD. We conducted an experimental comparison in terms of power, for m = 2,
n = 10⁴, and ρ = 0.5 (equal sample sizes). All quantities involving the eigenvalues of the covariance operator were
computed from their closed-form counterparts instead of being estimated. Sampling from p_2 was performed
by inverting the cumulative distribution function. The table below displays the results, averaged
over 10 Monte-Carlo runs.

    γ       10⁻¹            10⁻⁴            10⁻⁷            10⁻¹⁰
    KFDA    0.01 ± 0.0032   0.11 ± 0.0062   0.98 ± 0.0031   0.99 ± 0.0001
    MMD     0.01 ± 0.0023   id.             id.             id.

Table 1: Evolution of the power of KFDA and MMD, respectively, as γ goes to 0.
5.2 Speaker verification
We conducted experiments on a speaker verification task [3], on a subset of 8 female speakers using
data from the NIST 2004 Speaker Recognition Evaluation. We refer the reader to [16] for details on the pre-processing of the data. The figure shows results averaged over all pairs of speakers. For each pair of speakers, at each run we took 3000 samples of each speaker and launched our
KFDA test to decide whether the samples come from the same speaker or not, and computed the type
II error by comparing the prediction to ground truth. We averaged the results over 100 runs for each
pair, and over all pairs of speakers. The level was set to α = 0.05, since for this value the empirical level seemed
to match the prescribed one, as we noticed in the previous subsection. We performed the same experiments for the Maximum Mean Discrepancy (MMD) and the Tajvidi-Hall test statistic
(TH, [13]). We summarize the results by plotting the ROC curve for all competing methods. Our
method reaches good empirical power for a small value of the prescribed level (1 − β = 90% at
α = 0.05). Maximum Mean Discrepancy also yields good empirical performance on this task.
6 Conclusion
We proposed a well-calibrated test statistic, built on kernel Fisher discriminant analysis, for which
we proved that the asymptotic limit distribution under the null hypothesis is the standard normal distribution. Our test statistic can be readily computed from Gram matrices once a kernel is defined, and
allows us to perform nonparametric hypothesis testing for homogeneity for high-dimensional data.
The KFDA test statistic yields competitive performance for speaker identification.

Figure 1: Comparison of ROC curves in a speaker verification task (power versus level; methods: KFDA, MMD, TH).
7 Sketch of proof of asymptotic normality under the null hypothesis
Outline. The proof of the asymptotic normality of the test statistic under the null hypothesis follows
four steps. As a first step, we derive an asymptotic approximation of the test statistic as γ_n +
γ_n^{−1} n^{−1/2} → 0, in which the only remaining stochastic term is δ̂. The test statistic is then expanded
onto the eigenbasis of Σ, and decomposed into two terms B_n and C_n. The second step allows us to
prove the asymptotic negligibility of B_n, while the third step establishes the asymptotic normality
of C_n by a martingale central limit theorem (MCLT).
Step 1: T̂_n(γ_n) = T̃_n(γ_n) + o_P(1). First, we may prove, using perturbation results for covariance
operators, that, as γ_n + γ_n^{−1} n^{−1/2} → 0, we have

    T̂_n(γ_n) = [ (n_1 n_2 / n) ‖(Σ + γ_n I)^{−1/2} δ̂‖²_H − d_1(Σ, γ_n) ] / [ √2 d_2(Σ, γ_n) ] + o_P(1) =: T̃_n(γ_n) + o_P(1) .    (9)
For ease of notation, in the following we shall often omit γ_n in quantities involving it. Hence, from
now on, λ_p, λ_q, d_{2,n} stand for λ_p(Σ), λ_q(Σ), d_2(Σ, γ_n). Denoting by {e_p}_{p≥1} the orthonormal
eigenbasis of Σ, define

    Y_{n,p,i} := ( n_2/(n_1 n) )^{1/2} [ e_p(X_i^(1)) − E e_p(X_1^(1)) ]    for 1 ≤ i ≤ n_1 ,
    Y_{n,p,i} := −( n_1/(n_2 n) )^{1/2} [ e_p(X_{i−n_1}^(2)) − E e_p(X_1^(2)) ]    for n_1 + 1 ≤ i ≤ n .    (10)
We now give formulas for the moments of {Y_{n,p,i}}_{1≤i≤n, p≥1}, often used in the proof. Straightforward calculations give

    Σ_{i=1}^n E[ Y_{n,p,i} Y_{n,q,i} ] = λ_p^{1/2} λ_q^{1/2} δ_{p,q} ,    (11)

while the Cauchy-Schwarz inequality and the reproducing property give

    Cov( Y²_{n,p,i}, Y²_{n,q,i} ) ≤ C n^{−2} |k|_∞ λ_p^{1/2} λ_q^{1/2} .    (12)

Denote S_{n,p} := Σ_{i=1}^n Y_{n,p,i}. Using Eq. (11), our test statistic now writes as T̃_n = (√2 d_{2,n})^{−1} A_n,
with

    A_n := (n_1 n_2 / n) ‖(Σ + γ_n I)^{−1/2} δ̂‖²_H − d_{1,n} = Σ_{p=1}^∞ (λ_p + γ_n)^{−1} ( S²_{n,p} − E S²_{n,p} ) = B_n + 2 C_n ,    (13)
where B_n and C_n are defined as follows:

    B_n := Σ_{p=1}^∞ (λ_p + γ_n)^{−1} Σ_{i=1}^n ( Y²_{n,p,i} − E Y²_{n,p,i} ) ,    (14)

    C_n := Σ_{p=1}^∞ (λ_p + γ_n)^{−1} Σ_{i=1}^n Y_{n,p,i} ( Σ_{j=1}^{i−1} Y_{n,p,j} ) .    (15)
Step 2: B_n = o_P(1). The proof consists in computing the variance of this term. Since the variables
Y_{n,p,i} and Y_{n,q,j} are independent if i ≠ j, we have Var(B_n) = Σ_{i=1}^n v_{n,i}, where

    v_{n,i} := Var( Σ_{p=1}^∞ (λ_p + γ_n)^{−1} { Y²_{n,p,i} − E[Y²_{n,p,i}] } )
            = Σ_{p,q=1}^∞ (λ_p + γ_n)^{−1} (λ_q + γ_n)^{−1} Cov( Y²_{n,p,i}, Y²_{n,q,i} ) .

Using Eq. (12), we get Σ_{i=1}^n v_{n,i} ≤ C n^{−1} γ_n^{−2} ( Σ_{p=1}^∞ λ_p^{1/2} )², where the right-hand side is indeed
negligible, since by assumption we have γ_n^{−1} n^{−1/2} → 0 and Σ_{p=1}^∞ λ_p^{1/2} < ∞.
Step 3: d_{2,n}^{−1} C_n →ᴰ N(0, 1/2). We use the central limit theorem for triangular arrays of
martingale differences (MCLT; see e.g. [12, Theorem 3.2]). For i = 1, ..., n, denote

    ξ_{n,i} := d_{2,n}^{−1} Σ_{p=1}^∞ (λ_p + γ_n)^{−1} Y_{n,p,i} M_{n,p,i−1} ,    where    M_{n,p,i} := Σ_{j=1}^i Y_{n,p,j} ,    (16)

and let F_{n,i} := σ( Y_{n,p,j}, p ∈ {1, ..., n}, j ∈ {0, ..., i} ). Note that, by construction, ξ_{n,i} is a martingale increment, i.e. E[ ξ_{n,i} | F_{n,i−1} ] = 0. The first step in the proof of the CLT is to establish
that

    s²_n := Σ_{i=1}^n E[ ξ²_{n,i} | F_{n,i−1} ] →ᴾ 1/2 .    (17)

The second step of the proof is to establish the negligibility condition. We use [12, Theorem
3.2], which requires establishing that max_{1≤i≤n} |ξ_{n,i}| →ᴾ 0 (smallness) and that E( max_{1≤i≤n} ξ²_{n,i} )
is bounded in n (tightness), where ξ_{n,i} is defined in (16). We will establish the two conditions
simultaneously by checking that

    E[ max_{1≤i≤n} ξ²_{n,i} ] = o(1) .    (18)
Splitting the sum s²_n between the diagonal terms D_n and the off-diagonal terms E_n, we have

    D_n := d_{2,n}^{−2} Σ_{p=1}^∞ (λ_p + γ_n)^{−2} Σ_{i=1}^n M²_{n,p,i−1} E[Y²_{n,p,i}] ,    (19)

    E_n := d_{2,n}^{−2} Σ_{p≠q} (λ_p + γ_n)^{−1} (λ_q + γ_n)^{−1} Σ_{i=1}^n M_{n,p,i−1} M_{n,q,i−1} E[ Y_{n,p,i} Y_{n,q,i} ] .    (20)

Consider first the diagonal terms D_n. We first compute their mean. Note that E[M²_{n,p,i}] =
Σ_{j=1}^i E[Y²_{n,p,j}]. Using Eq. (11) we get

    Σ_{p=1}^∞ (λ_p + γ_n)^{−2} Σ_{i=1}^n Σ_{j=1}^{i−1} E[Y²_{n,p,j}] E[Y²_{n,p,i}]
        = (1/2) Σ_{p=1}^∞ (λ_p + γ_n)^{−2} { [ Σ_{i=1}^n E[Y²_{n,p,i}] ]² − Σ_{i=1}^n E²[Y²_{n,p,i}] }
        = (1/2) d²_{2,n} ( 1 + O(n^{−1}) ) ,

since Σ_{i=1}^n E[Y²_{n,p,i}] = λ_p by (11). Therefore, E[D_n] = 1/2 + o(1). Next, we may prove that D_n − E[D_n] = o_P(1) by
checking that Var[D_n] = o(1). We finally consider E_n defined in (20), and prove that E_n = o_P(1)
using Eq. (11). This concludes the proof of Eq. (17).
We finally show Eq. (18). Since |Y_{n,p,i}| ≤ n^{−1/2} |k|_∞^{1/2} P-a.s., we may bound

    max_{1≤i≤n} |ξ_{n,i}| ≤ C d_{2,n}^{−1} n^{−1/2} Σ_{p=1}^∞ (λ_p + γ_n)^{−1} max_{1≤i≤n} |M_{n,p,i−1}| .    (21)

Then, the Doob inequality implies that E^{1/2}[ max_{1≤i≤n} M²_{n,p,i−1} ] ≤ E^{1/2}[ M²_{n,p,n−1} ] ≤ C λ_p^{1/2}.
Plugging this bound into (21), the Minkowski inequality gives

    E^{1/2}[ max_{1≤i≤n} ξ²_{n,i} ] ≤ C d_{2,n}^{−1} γ_n^{−1} n^{−1/2} Σ_{p=1}^∞ λ_p^{1/2} ,

and the proof is concluded using the fact that γ_n + γ_n^{−1} n^{−1/2} → 0 and Assumption (B1).
References

[1] D. L. Allen. Hypothesis testing using an L1-distance bootstrap. The American Statistician, 51(2):145–150, 1997.
[2] N. H. Anderson, P. Hall, and D. M. Titterington. Two-sample test statistics for measuring discrepancies between two multivariate probability density functions using kernel-based density estimates. Journal of Multivariate Analysis, 50(1):41–54, 1994.
[3] F. Bimbot, J.-F. Bonastre, C. Fredouille, G. Gravier, I. Magrin-Chagnolleau, S. Meignier, T. Merlin, J. Ortega-Garcia, D. Petrovska-Delacretaz, and D. A. Reynolds. A tutorial on text-independent speaker verification. EURASIP, 4:430–451, 2004.
[4] K. Borgwardt, A. Gretton, M. Rasch, H.-P. Kriegel, B. Schölkopf, and A. J. Smola. Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics, 22(14):49–57, 2006.
[5] H. Brezis. Analyse Fonctionnelle. Masson, 1980.
[6] C. Butucea and K. Tribouley. Nonparametric homogeneity tests. Journal of Statistical Planning and Inference, 136(3):597–639, 2006.
[7] E. Carlstein, H. Müller, and D. Siegmund, editors. Change-point Problems, number 23 in IMS Monograph. Institute of Mathematical Statistics, Hayward, CA, 1994.
[8] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In Adv. NIPS, 2008.
[9] I. Gohberg, S. Goldberg, and M. A. Kaashoek. Classes of Linear Operators, Vol. I. Birkhäuser, 1990.
[10] U. Grenander and M. Miller. Pattern Theory: From Representation to Inference. Oxford Univ. Press, 2007.
[11] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two-sample problem. In Adv. NIPS, 2006.
[12] P. Hall and C. Heyde. Martingale Limit Theory and Its Application. Academic Press, 1980.
[13] P. Hall and N. Tajvidi. Permutation tests for equality of distributions in high-dimensional settings. Biometrika, 89(2):359–374, 2002.
[14] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer Series in Statistics. Springer, 2001.
[15] E. Lehmann and J. Romano. Testing Statistical Hypotheses (3rd ed.). Springer, 2005.
[16] J. Louradour, K. Daoudi, and F. Bach. Feature space Mahalanobis sequence kernels: Application to SVM speaker verification. IEEE Transactions on Audio, Speech and Language Processing, 2007. To appear.
[17] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge Univ. Press, 2004.
[18] I. Steinwart, D. Hush, and C. Scovel. An explicit description of the reproducing kernel Hilbert spaces of Gaussian RBF kernels. IEEE Transactions on Information Theory, 52:4635–4643, 2006.
[19] G. Wahba. Spline Models for Observational Data. SIAM, 1990.
Near-Maximum Entropy Models for Binary
Neural Representations of Natural Images
Matthias Bethge and Philipp Berens
Max Planck Institute for Biological Cybernetics
Spemannstrasse 41, 72076 Tübingen, Germany
mbethge,berens@tuebingen.mpg.de
Abstract
Maximum entropy analysis of binary variables provides an elegant way for studying the role of pairwise correlations in neural populations. Unfortunately, these
approaches suffer from their poor scalability to high dimensions. In sensory coding, however, high-dimensional data is ubiquitous. Here, we introduce a new
approach using a near-maximum entropy model that makes this type of analysis feasible for very high-dimensional data: the model parameters can be derived
in closed form and sampling is easy. Therefore, our NearMaxEnt approach can
serve as a tool for testing predictions from a pairwise maximum entropy model not
only for low-dimensional marginals, but also for high-dimensional measurements
of more than a thousand units. We demonstrate its usefulness by studying natural
images with dichotomized pixel intensities. Our results indicate that the statistics
of such higher-dimensional measurements exhibit additional structure that is not
predicted by pairwise correlations, despite the fact that pairwise correlations explain the lower-dimensional marginal statistics surprisingly well up to the limit of
dimensionality where estimation of the full joint distribution is feasible.
1 Introduction
A core issue in sensory coding is to seek out and model statistical regularities in high-dimensional
data. In particular, motivated by developments in information theory, it has been hypothesized
that modeling these regularities by means of redundancy reduction constitutes an important goal of
early visual processing [2]. Recent studies conjectured that the binary spike responses of retinal
ganglion cells may be characterized completely in terms of second-order correlations when using
a maximum entropy approach [13, 12]. In light of what we know about the statistics of the visual
input, however, this would be very surprising: Natural images are known to exhibit complex higherorder correlations which are extremely difficult to model yet being perceptually relevant. Thus, if
we assume that retinal ganglion cells do not discard the information underlying these higher-order
correlations altogether, it would be a very difficult signal processing task to remove all of those
already within the retinal network.
Oftentimes, neurons involved in early visual processing are modeled as rather simple computational
units akin to generalized linear models, where a linear filter is followed by a point-wise nonlinearity.
For such simple neuron models, the possibility of removing higher-order correlations present in the
input is very limited [3].
Here, we study the role of second-order correlations in the multivariate binary output statistics of
such linear-nonlinear model neurons with a threshold nonlinearity responding to natural images.
That is, each unit can be described by an affine transformation z_k = w_kᵀ x + θ_k followed by a
point-wise signum function s_k = sgn(z_k). Our interest in this model is twofold: (A) It can be
regarded as a parsimonious model for the analysis of population codes of natural images for which the
computational power and the bandwidth of each unit is limited. (B) The same model can also be
used more generally to fit multivariate binary data with given pairwise correlations, if x is drawn
from a Gaussian distribution. In particular, we will show that the resulting distribution closely
resembles the binary maximum entropy models known as Ising models or Boltzmann machines,
which have recently become popular for the analysis of spike train recordings from retinal ganglion
cell responses [13, 12].

Figure 1: Similarity between the Ising and the DG model. A+C: Entropy difference ΔH between the Ising
model and the dichotomized Gaussian distribution as a function of dimensionality. A: Up to 10 dimensions
we can compute H_DG directly by evaluating Eq. (6). Gray dots correspond to different sets of parameters. For
m ≥ 4, the relatively large scatter and the existence of negative values are due to the limited numerical precision
of the Monte-Carlo integration. Error bars show the standard error of the mean. B: JS-divergence D_JS between P_I
and P_DG. C: ΔH as above, for higher dimensions. Up to 20 dimensions ΔH remains very small. The increase
for m ≈ 20 is most likely due to undersampling of the distributions. D: ΔH as a function of the sample size used
to estimate H_DG, at seven (black) and ten (grey) dimensions (note log scale on both axes). ΔH decreases with
a power law as the sample size increases.
Motivated by the analysis in [12, 13] and the discussion in [10], we are interested, at a more general level, in the following questions: are pairwise interactions enough for understanding the statistical regularities in high-dimensional natural data (given that they provide a good fit in the low-dimensional case)? If we suppose that pairwise interactions are enough, what can we say about the
amount of redundancy in high-dimensional data? In comparison with neural spike data, natural
images provide two advantages for studying these questions: 1) It is much easier to obtain large
amounts of data with millions of samples which are less prone to nonstationarities. 2) Often differences in the higher-order statistics, such as between pink noise and natural images, can be recognized
by eye.
2 Second order models for binary variables
In order to study whether pairwise interactions are enough to determine the statistical regularities
in high-dimensional data, it is necessary to be able to compute the maximum entropy distribution
for large number of dimensions N . Given a set of measured statistics, maximum entropy models
yield a full probability distribution that is consistent with these constraints but does not impose any
additional structure on the distribution [7]. For binary data with given mean activations μ_i = ⟨s_i⟩
and correlations between neurons Σ_ij = ⟨s_i s_j⟩ − ⟨s_i⟩⟨s_j⟩, one obtains a quadratic exponential
probability mass function known as the Ising model in physics or as the Boltzmann machine in
machine learning.

Figure 2: Examples of covariance matrices (A+B) and their learned approximations (C+D) at m = 10, for
clarity. λ is the parameter controlling the steepness of the correlation decrease. E+F: Eigenvalue spectra of both
matrices. G: Entropy difference ΔH and H: JS-divergence between the distributions of samples obtained from
the two models at m = 7.
Currently all methods used to determine the parameters of such binary maximum entropy models
suffer from the same drawback: since the parameters do not correspond directly to any of the measured statistics, they have to be inferred (or "learned") from data. In high dimensions though, this
poses a difficult computational problem. Therefore the characterization of complete neural circuits
with possibly hundreds of neurons is still out of reach, even though analysis was recently extended
to up to forty neurons [14].
To make the maximum entropy approach feasible in high dimensions, we propose a new strategy:
sampling from a "near-maximum" entropy model that does not require any complicated learning
of parameters. In order to justify this approach, we verify empirically that the entropy of the full
probability distributions obtained with the near-maximum entropy model are indistinguishable from
those obtained with classical methods such as Gibbs sampling for up to 20 dimensions.
2.1 Boltzmann machine learning
For a binary vector of neural activities s ∈ {−1, 1}^m and specified μ_i and Σ_ij, the Ising model takes
the form

    P_I(s) = (1/Z) exp( Σ_{i=1}^m h_i s_i + (1/2) Σ_{i≠j} J_ij s_i s_j ) ,    (1)

where the local fields h_i and the couplings J_ij have to be chosen such that ⟨s_i⟩ = μ_i and ⟨s_i s_j⟩ −
⟨s_i⟩⟨s_j⟩ = Σ_ij. Unfortunately, finding the correct parameters turns out to be a difficult problem
which cannot be solved in closed form.
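For small m the distribution in Eq. (1) can be evaluated exactly by enumerating all 2^m states, which is how Ising entropies can be computed for the comparisons in Section 3; the brute-force sketch below is our illustration, with the parameter scale of Section 3.1.

```python
import numpy as np
from itertools import product

def ising_log_probs(h, J):
    # Exact log P_I(s) of Eq. (1) for all 2^m states (feasible only for small m).
    m = len(h)
    S = np.array(list(product([-1.0, 1.0], repeat=m)))
    E = S @ h + 0.5 * np.einsum("si,ij,sj->s", S, J, S)  # J symmetric, zero diagonal
    logZ = np.logaddexp.reduce(E)
    return S, E - logZ

rng = np.random.default_rng(0)
m = 8
h = rng.normal(0.0, 0.4, m)
J = rng.normal(0.0, 0.4, (m, m))
J = 0.5 * (J + J.T)
np.fill_diagonal(J, 0.0)

S, logp = ising_log_probs(h, J)
H_I = -np.sum(np.exp(logp) * logp) / np.log(2.0)  # entropy H_I in bits
print(H_I)
```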
Therefore, one has to resort to an optimization approach to learn the model parameters h_i and J_ij
from data. This problem is called Boltzmann machine learning and is based on maximization of the
log-likelihood L = ln P_I({s_i}_{i=1}^N | h, J) [1], where N is the number of samples. The gradient of the
likelihood can be computed in terms of the empirical covariance and the covariance of s_i and s_j as
produced by the current model:

    ∂L/∂J_ij = ⟨s_i s_j⟩_Data − ⟨s_i s_j⟩_Model .    (2)

The second term on the right-hand side is difficult to compute, as it requires sampling from the model.
Since the partition function Z in Eq. (1) is not available in closed form, Monte-Carlo methods such
as Gibbs sampling are employed [9] in order to approximate the required model average. This is
computationally demanding, as sampling is necessary for each individual update. While efficient
sampling algorithms exist for special cases [6], it still remains a hard and time-consuming problem
in the general case. Additionally, most sampling algorithms do not come with guarantees for the
quality of the approximation of the required average. In conclusion, parameter fitting of the Ising
model is slow and oftentimes painstaking, especially in high dimensions.

Figure 3: Random samples of dichotomized 4×4 patches from the van Hateren image database (left) and from
the corresponding dichotomized Gaussian distribution with equal covariance matrix (middle). It is not possible
to see any systematic difference between the samples from the two distributions. For comparison, this is not so
for the sample from the independent model (right).
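For completeness, here is a bare-bones sketch of the procedure just described: a single-site Gibbs sampler for Eq. (1) and one gradient-ascent step based on Eq. (2). It is our illustration; the step size and the sample counts are arbitrary choices, not values from the paper.

```python
import numpy as np

def gibbs_sample(h, J, n_samples, n_burn=500, rng=None):
    # Single-site Gibbs sampler for the Ising model of Eq. (1); J symmetric, zero diagonal.
    rng = rng or np.random.default_rng()
    m = len(h)
    s = rng.choice([-1.0, 1.0], size=m)
    out = np.empty((n_samples, m))
    for t in range(n_burn + n_samples):
        for i in range(m):
            field = h[i] + J[i] @ s                      # local field at unit i
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))  # P(s_i = +1 | rest)
            s[i] = 1.0 if rng.random() < p_plus else -1.0
        if t >= n_burn:
            out[t - n_burn] = s
    return out

def boltzmann_step(h, J, data, eta=0.05, n_model=2000):
    # One ascent step on the log-likelihood; gradient from Eq. (2).
    model = gibbs_sample(h, J, n_model)
    grad_J = data.T @ data / len(data) - model.T @ model / len(model)
    grad_h = data.mean(axis=0) - model.mean(axis=0)
    np.fill_diagonal(grad_J, 0.0)
    return h + eta * grad_h, J + eta * grad_J
```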
2.2 Modeling with the dichotomized Gaussian
Here we explore an intriguing alternative to the Monte-Carlo approach: we replace the Ising model
by a "near-maximum" entropy model, for which both parameter computation and sampling are easy. A
very convenient, but in this context rarely recognized, candidate model is the dichotomized Gaussian
distribution (DG) [11, 5, 4]. It is obtained by supposing that the observed binary vector s is generated
from a hidden Gaussian variable:

    z ~ N(γ, Λ) ,    s_i = sgn(z_i) .    (3)
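Sampling is then essentially a two-liner; this minimal sketch (ours) draws latent Gaussian vectors and takes signs, exactly as prescribed by Eq. (3).

```python
import numpy as np

def sample_dg(gamma, Lam, n, rng=None):
    # Dichotomized Gaussian samples: z ~ N(gamma, Lam), s_i = sgn(z_i), Eq. (3).
    rng = rng or np.random.default_rng()
    z = rng.multivariate_normal(gamma, Lam, size=n)
    return np.where(z >= 0.0, 1.0, -1.0)

S = sample_dg(np.zeros(3), np.eye(3), 10)  # e.g. 10 independent 3-unit patterns
```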
Without loss of generality, we can assume unit variances for the Gaussian, i.e. Λ_ii = 1; the mean μ
and the covariance matrix Σ of s are then given by

    μ_i = 2Φ(γ_i) − 1 ,    Σ_ii = 4Φ(γ_i)Φ(−γ_i) ,    Σ_ij = 4Ψ(γ_i, γ_j, Λ_ij)  for i ≠ j ,    (4)

where Ψ(x, y, λ) := Φ₂(x, y, λ) − Φ(x)Φ(y). Here Φ is the univariate standardized cumulative
Gaussian distribution and Φ₂ its bivariate counterpart. While the computation of the model parameters was hard for the Ising model, these equations can be easily inverted to find the parameters of
the hidden Gaussian distribution:

    γ_i = Φ^{−1}( (μ_i + 1) / 2 ) .    (5)

Determining Λ_ij generally requires finding a value such that Σ_ij − 4Ψ(γ_i, γ_j, Λ_ij) = 0.
This can be efficiently solved numerically, since the function is monotonic in Λ_ij
and has a unique zero crossing. We obtain an especially easy case when γ_i = γ_j = 0, as then
Λ_ij = sin( (π/2) Σ_ij ).
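The inversion can be sketched as follows (our illustration, assuming unit latent variances and that the target covariance is attainable by a dichotomized Gaussian): Eq. (5) gives the latent means in closed form, and each latent correlation Λ_ij is found by a bracketed root search on Eq. (4) using SciPy's univariate and bivariate normal CDFs.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

def dg_params(mu, Sigma):
    # Invert Eq. (4): latent means gamma via Eq. (5); latent correlations Lam_ij by
    # solving Sigma_ij - 4 Psi(gamma_i, gamma_j, Lam_ij) = 0 (monotonic, unique root).
    mu = np.asarray(mu, dtype=float)
    Sigma = np.asarray(Sigma, dtype=float)
    gamma = norm.ppf((mu + 1.0) / 2.0)
    m = len(mu)
    Lam = np.eye(m)
    for i in range(m):
        for j in range(i + 1, m):
            def f(r):
                phi2 = multivariate_normal.cdf([gamma[i], gamma[j]],
                                               mean=[0.0, 0.0],
                                               cov=[[1.0, r], [r, 1.0]])
                psi = phi2 - norm.cdf(gamma[i]) * norm.cdf(gamma[j])
                return Sigma[i, j] - 4.0 * psi
            Lam[i, j] = Lam[j, i] = brentq(f, -0.999, 0.999)
    return gamma, Lam

# zero-mean check: Lam_ij should match sin(pi * Sigma_ij / 2)
g, L = dg_params([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]])
print(L[0, 1], np.sin(np.pi * 0.3 / 2.0))
```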
It is also possible to evaluate the probability mass function of the DG model by numerical integration:

    P_DG(s) = (2π)^{−m/2} |Λ|^{−1/2} ∫_{a_1}^{b_1} ··· ∫_{a_m}^{b_m} exp( −(1/2) (z − γ)ᵀ Λ^{−1} (z − γ) ) dz ,    (6)

where the integration limits are chosen as a_i = 0 and b_i = ∞ if s_i = 1, and a_i = −∞ and b_i = 0
otherwise.
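As a cross-check, the orthant integral of Eq. (6) can also be estimated by plain Monte Carlo instead of numerical integration (the experiments below use MATLAB's mvncdf for the latter); this stand-in is our sketch.

```python
import numpy as np

def pdg_mc(s, gamma, Lam, n_mc=200_000, rng=None):
    # P_DG(s) is the mass N(gamma, Lam) puts on the orthant matching the signs of s.
    rng = rng or np.random.default_rng()
    z = rng.multivariate_normal(gamma, Lam, size=n_mc)
    return np.mean(np.all((z >= 0.0) == (np.asarray(s) > 0), axis=1))
```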
In summary, the proposed model has two advantages over the traditional Ising model: (1) Sampling
is easy, and (2) finding the model parameters is easy too.
3 Near-maximum entropy behavior of the dichotomized Gaussian distribution
In the previous section we introduced the dichotomized Gaussian distribution. Our conjecture is that
in many cases it can serve as a convenient approximation to the Ising model. Now, we investigate
how good this approximation is. For a wide range of interaction terms and mean activations we
verify that the DG model closely resembles the Ising model. In particular we show that the entropy of
the DG distribution is not noticeably smaller than the entropy of the Ising model, even at rather high dimensions.
3.1 Random Connectivity
We created randomly connected networks of varying size m, where local fields h_i and
interaction terms J_ij were drawn from N(0, 0.4). First, we compared the entropy H_I =
−Σ_s P_I(s) log₂ P_I(s) of the thus-specified Ising model, obtained by evaluating Eq. (1), with the entropy H_DG of the DG distribution computed by numerical integration¹ from Eq. (6) (twenty parameter
sets). The entropy difference ΔH = H_I − H_DG was smaller than 0.002 percent of H_I (Fig. 1 A,
note scale) and probably within the range of the numerical integration accuracy. In addition, we
computed the Jensen-Shannon divergence D_JS[P_I ‖ P_DG] = (1/2)( D_KL[P_I ‖ M] + D_KL[P_DG ‖ M] ),
where M = (1/2)(P_I + P_DG) [8]. We find that D_JS[P_I ‖ P_DG] is extremely small up to 10 dimensions
(Fig. 1 B). Therefore, the distributions seem not only to be close in their respective entropies, but also
to have a very similar structure.

Next, we extended this analysis to networks of larger size and repeated the same analysis for up to
twenty dimensions. Since the integration in Eq. (6) becomes too time-consuming for these larger m due
to the large number of states, we used a histogram-based estimate of P_DG (using 3 × 10⁶ samples
for m < 15 and 15 × 10⁶ samples for m ≥ 15). The estimate of ΔH is still very small at high
dimensions (Fig. 1 C, below 0.5%). We also computed D_JS, which scaled similarly to ΔH (data
not shown).

In Fig. 1 C, ΔH seems to increase with dimensionality. Therefore, we investigated how the estimate
of ΔH is influenced by the number of samples used. We computed both quantities for varying numbers of samples from the DG distribution (for m = 7, 10). As ΔH decreases according to a power
law with increasing sample size, the rise of ΔH observed in Fig. 1 C is most likely due to undersampling of
the distribution.
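The estimators used in this section are simple to reproduce. The sketch below (ours) computes plug-in pattern probabilities over the 2^m states, the entropy in bits, and the Jensen-Shannon divergence; as the undersampling analysis above warns, these plug-in estimates are biased when 2^m is large relative to the number of samples.

```python
import numpy as np

def pattern_probs(S, m):
    # Plug-in distribution over the 2^m binary patterns from samples S in {-1,+1}^m.
    idx = (((S + 1) // 2) @ (2 ** np.arange(m))).astype(int)
    counts = np.bincount(idx, minlength=2**m)
    return counts / counts.sum()

def entropy_bits(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def kl_bits(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def js_bits(p, q):
    mid = 0.5 * (p + q)
    return 0.5 * kl_bits(p, mid) + 0.5 * kl_bits(q, mid)
```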
3.2 Specified covariance structure
To explore the relationship between the two techniques more systematically, we generated covariance matrices with varying eigenvalue spectra. We used a parametric Toeplitz form, where the n-th
diagonal is set to the constant value exp(−λ n) (Fig. 2 A and B, m = 7, 10). We varied the decay
parameter λ, which led to widely varying covariance structures (for eigenvalue spectra, see Fig. 2 E
and F). We fit the Ising models using the Boltzmann machine gradient descent procedure. The covariance matrix of the samples drawn from the Ising model resembles the original very closely (Fig.
2 C and D). We also computed the entropy of the DG model using the desired covariance structure.
We estimated ΔH and D_JS[P_I ‖ P_DG], averaged over 10 trials with 10⁵ samples obtained by Gibbs
sampling from the Ising model. ΔH is very close to zero (Fig. 2 G, m = 7) except for small λ,
and never exceeded 0.05%. Moreover, the structure of both distributions seems to be very similar as
well (Fig. 2 H, m = 7). At m = 10, both quantities scaled qualitatively similarly (data not shown).
We also repeated this analysis using Equations (1) and (6) as before, which led to similar results (data
not shown).
Our experiments demonstrate clearly that the dichotomized Gaussian distribution constitutes a good
approximation to the quadratic exponential distribution for a large parameter range. In the following
section, we will exploit the similarity between the two models to study how the role of second-order
correlations may change between low-dimensional and high-dimensional statistics in case of natural
images.
¹ For integration, we used the mvncdf function of Matlab. For m ≥ 4 this function employs Monte-Carlo
integration.
Figure 4: A: Negative log-probabilities of the DG model are plotted against ground truth (red dots). Identical
distributions fall on the diagonal. Data points outside the area enclosed by the dashed lines indicate significant
differences between the model and ground truth. The DG model matches the true distribution very well. For
comparison, the independent model is shown as well (blue crosses). B: The multi-information of the true
distribution (blue dots) accurately agrees with the multi-information of the DG model (red line). Similar to
the analysis in [12], we observe a power-law behavior of the entropy of the independent model (black solid
line) and the multi-information. Linear extrapolation (in the log-log plot) to higher dimensions is indicated by
dashed lines. C: A different way of presenting the same data as in B: the joint entropy H = H_indep − I
(blue dots) is plotted instead of I, and the axes are in linear scale. The dashed red line represents the same
extrapolation as in B.
4 Natural images: Second order and beyond
We now investigate to which extent the statistics of natural images with dichotomized pixel intensities can be characterized by pairwise correlations only. In particular, we would like to know how
the role of pairwise correlations as opposed to higher-order correlations changes depending on the dimensionality. Thanks to the DG model introduced above, we are in the position to study the effect
of pairwise correlations for high-dimensional binary random variables (N ≈ 1000 or even larger).
We use the van Hateren image database in log-intensity scale, from which we sample small image
patches at random positions. The threshold for the dichotomization is set to the median of pixel
intensities. That is, each binary variable encodes whether the corresponding pixel intensity is above
or below the median over the ensemble. Up to patch sizes of 4 × 4 pixels, the true joint statistics can
be assessed using nonparametric histogram methods. Before we present quantitative comparisons, it
is instructive to look at random samples from the true distribution (Fig. 3, left), from the DG model
with same mean and covariance (Fig. 3, middle), and from the corresponding independent model
(Fig. 3, right). By visual inspection, it seems that the DG model fits the true distribution well.
In order to quantify how well the DG model matches the true distribution, we draw two independent
sets of samples from each (N = 2 × 10⁶ for each set) and generate a scatter plot as shown in
Fig. 4 A for 4 × 4 image patches. Each dot corresponds to one of the 2¹⁶ = 65536 possible different
binary patterns. The relative frequencies of these patterns according to the DG model (red dots) and
according to the independent model (blue dots) are plotted against the relative frequencies obtained
from the natural image patches. The solid diagonal line corresponds to a perfect match between
model and ground truth. The dashed lines enclose the regions within which deviations are to be
expected due to the finite sampling size. Since most of the red dots fall within this region, the DG
model fits the data distribution very well.
We also systematically evaluated the JS-divergence and the multi-information I[S] = Σ_k H[S_k] −
H[S] as a function of dimensionality. That is, we started with the bivariate marginal distribution
of two randomly selected pixels. Then we incrementally added more pixels at random locations
until the random vector contained all 16 pixels of the 4 × 4 image patches. Independent of the
dimension, the JS-divergence between the DG model and the true distribution is smaller than 0.015
bits. For comparison, the JS-divergence between the independent model and the true distribution
increases with dimensionality from roughly 0.2 bits in the case of two pixels up to 0.839 bits in
the case of 16 pixels. For two independent sets of samples both drawn from natural image data the
JS-divergence ranges between 0.006 and 0.007 bits for 4 × 4 patches, setting the gold standard for
the minimal possible JS-divergence one could achieve with any model due to finite sampling size.
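The multi-information estimate follows from the same histogram machinery; a minimal plug-in sketch (ours, assuming no unit is perfectly silent or saturated) is:

```python
import numpy as np

def multi_information_bits(S):
    # I[S] = sum_k H[S_k] - H[S], plug-in estimate from samples S in {-1,+1}^m;
    # marginals p1 are assumed to lie strictly in (0, 1).
    n, m = S.shape
    p1 = (S > 0).mean(axis=0)                      # marginal P(s_k = +1)
    h_marg = -(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1)).sum()
    idx = ((S > 0) @ (2 ** np.arange(m))).astype(int)
    p = np.bincount(idx, minlength=2**m) / n
    p = p[p > 0]
    return h_marg + np.sum(p * np.log2(p))         # = sum_k H[S_k] - H[S]
```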
Carrying out the same type of analysis as in [12], we make qualitatively the same observations as
reported there: as shown above, we find a quite accurate match between the two distributions.
Figure 5: Random samples of dichotomized 32×32 patches from the van Hateren image database (left) and
from the corresponding dichotomized Gaussian distribution with equal covariance matrix (right). For the latter, the percept of typical objects is missing because higher-order correlations are ignored. This striking
difference is not obvious, however, at the level of 4×4 patches, for which we found an excellent match of the
dichotomized Gaussian to the ensemble of natural images.
Furthermore, the multi-information of the DG model (red solid line) and of the true distribution (blue
dots) increases linearly on a log-log scale with the number of dimensions (Fig. 4 B). Both findings
can be verified only up to a rather limited number of dimensions (less than 20). Nevertheless, in [12],
two claims about the higher-dimensional statistics have been based on these two observations: First,
that pairwise correlations may be sufficient to determine the full statistics of binary responses, and
secondly, that the convergent scaling behavior in the log-log plot may indicate a transition towards
strong order.
Using natural images instead of retinal ganglion cell data, we would like to verify to what extent
the low-dimensional observations can be used to support these claims about the high-dimensional
statistics [10]. To this end we study the same kind of extrapolation (Fig. 4 B) to higher dimensions
(dashed lines) as in [12]. The difference between the entropy of the independent model and the
multi-information yields the joint entropy of the respective distribution. If the extrapolation is taken
seriously, this difference seems to vanish at the order of 50 dimensions, suggesting that the joint
entropy of the neural responses approaches zero at this size, say for 7 × 7 image patches (Fig. 4 C).
Though it was not taken literally, this point of "freezing" has been pointed out in [12] as a critical
network size at which a transition to strong order is to be expected. The meaning of this assertion,
however, is not clear. First of all, the joint entropy of a distribution can never be smaller than the
joint entropy of any of its marginals. Therefore, the joint entropy cannot decrease with increasing
number of dimensions as the extrapolation would suggest (Fig. 4 C). Instead it would be necessary to
ask more precisely how the growth rate of the joint entropy can be characterized and whether there
is a critical number of dimensions at which the growth rate suddenly drops. In our study with natural
images, visual inspection does not indicate anything special to happen at the ?critical patch size? of
7 ? 7 pixels. Rather, for all patch sizes, the DG model yields dichotomized pink noise. In Fig. 5
(right) we show a sample from the DG model for 32?32 image patches (i.e. 1024 dimensions) which
provides no indication for a particularly interesting change in the statistics towards strong order. The
exact law according to which the multi-information grows with the number of dimensions for large
m, however, is not easily assessed and remains to be explored.
Finally, we point out that the sufficiency of pairwise correlations at the level of m = 16 dimensions
does not hold any more in the case of large m: the samples from the true distribution at the left
hand side of Fig. 5 clearly show much more structure than the samples from the DG model (Fig. 5,
right), indicating that pairwise correlations do not suffice to determine the full statistics of large
image patches. Even if the match between the DG model and the Ising model may turn out to be
less accurate in high dimensions, this would not affect our conclusion. Any mismatch would only
introduce more order in the DG model than justified by pairwise correlations only.
5 Conclusion and Outlook
We proposed a new approach to maximum entropy modeling of binary variables, extending maximum entropy analysis to previously infeasible high dimensions: as both sampling and finding parameters are easy for the dichotomized Gaussian model, it overcomes the computational drawbacks of
Monte-Carlo methods. We verified numerically that the empirical entropy of the DG model is comparable to that obtained with Gibbs sampling at least up to 20 dimensions. For practical purposes,
the DG distribution can even be superior to the Gibbs sampler in terms of entropy maximization due
to the lack of independence between consecutive samples in the Gibbs sampler.
Although the Ising model and the DG model are in principle different, the match between the two
turns out to be surprisingly good for a large region of the parameter space. Currently, we are trying
to determine where the close similarity between the Ising model and the DG model breaks down.
In addition, we explore the possibility to use the dichotomized Gaussian distribution as a proposal
density for Monte-Carlo methods such as importance sampling. As it is a very close approximation
to the Ising model, we expect this combination to yield highly efficient sampling behaviour. In
summary, by linking the DG model to the Ising model, we believe that maximum entropy modeling
of multivariate binary random variables will become much more practical in the future.
We used the DG model to investigate the role of second-order correlations in the context of sensory coding of natural images. While for small image patches the DG model provided an excellent
fit to the true distribution, we were able to show that this agreement breaks down in the case
of larger image patches. Thus caution is required when extrapolating from low-dimensional measurements to higher-dimensional distributions because higher-order correlations may be invisible in
low-dimensional marginal distributions. Nevertheless, the maximum entropy approach seems to be
a promising tool for the analysis of correlated neural activities, and the DG model can facilitate its
use significantly in practice.
Acknowledgments
We thank Jakob Macke, Pierre Garrigues, and Greg Stephens for helpful comments and stimulating discussions, as well as Alexander Ecker and Andreas Hoenselaar for last-minute advice. An implementation of the DG model in Matlab and R will be available at our website
http://www.kyb.tuebingen.mpg.de/bethgegroup/code/DGsampling.
References

[1] D.H. Ackley, G.E. Hinton, and T.J. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147–169, 1985.
[2] H.B. Barlow. Sensory mechanisms, the reduction of redundancy, and intelligence. In The Mechanisation of Thought Processes, pages 535–539, London: Her Majesty's Stationery Office, 1959.
[3] M. Bethge. Factorial coding of natural images: How effective are linear models in removing higher-order dependencies? J. Opt. Soc. Am. A, 23(6):1253–1268, June 2006.
[4] D.R. Cox and N. Wermuth. On some models for multivariate binary variables parallel in complexity with the multivariate Gaussian distribution. Biometrika, 89:462–469, 2002.
[5] L.J. Emrich and M.R. Piedmonte. A method for generating high-dimensional multivariate binary variates. The American Statistician, 45(4):302–304, 1991.
[6] M. Huber. A bounding chain for Swendsen-Wang. Random Structures & Algorithms, 22:53–59, 2002.
[7] E.T. Jaynes. Where do we stand on maximum entropy inference. In R.D. Levine and M. Tribus, editors, The Maximum Entropy Formalism. MIT Press, Cambridge, MA, 1978.
[8] J. Lin. Divergence measures based on the Shannon entropy. IEEE Trans Inf Theory, 37:145–151, 1991.
[9] D. J. C. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
[10] Sheila H Nirenberg and Jonathan D Victor. Analyzing the activity of large populations of neurons: how tractable is the problem? Current Opinion in Neurobiology, 17:397–400, August 2007.
[11] Karl Pearson. On a new method of determining correlation between a measured character a, and a character b, of which only the percentage of cases wherein b exceeds (or falls short of) a given intensity is recorded for each grade of a. Biometrika, 7:96–105, 1909.
[12] Elad Schneidman, Michael J Berry, Ronen Segev, and William Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087):1007–1012, Apr 2006.
[13] J Shlens, JD Field, JL Gauthier, MI Grivich, D Petrusca, A Sher, AM Litke, and EJ Chichilnisky. The structure of multi-neuron firing patterns in primate retina. J Neurosci, 26(32):8254–8266, Aug 2006.
[14] G. Tkacik, E. Schneidman, M.J. Berry, and W. Bialek. Ising models for networks of real neurons. arXiv:q-bio.NC/0611072, 1:1–4, 2006.
2,577 | 3,337 | Discriminative Log-Linear Grammars
with Latent Variables
Slav Petrov and Dan Klein
Computer Science Department, EECS Division
University of California at Berkeley, Berkeley, CA, 94720
{petrov, klein}@cs.berkeley.edu
Abstract
We demonstrate that log-linear grammars with latent variables can be practically
trained using discriminative methods. Central to efficient discriminative training
is a hierarchical pruning procedure which allows feature expectations to be efficiently approximated in a gradient-based procedure. We compare L1 and L2 regularization and show that L1 regularization is superior, requiring fewer iterations
to converge, and yielding sparser solutions. On full-scale treebank parsing experiments, the discriminative latent models outperform both the comparable generative latent models as well as the discriminative non-latent baselines.
1 Introduction
In recent years, latent annotation of PCFG has been shown to perform as well as or better than standard lexicalized methods for treebank parsing [1, 2]. In the latent annotation scenario, we imagine
that the observed treebank is a coarse trace of a finer, unobserved grammar. For example, the single
treebank category NP (noun phrase) may be better modeled by several finer categories representing
subject NPs, object NPs, and so on. At the same time, discriminative methods have consistently
provided advantages over their generative counterparts, including less restriction on features and
greater accuracy [3, 4, 5]. In this work, we therefore investigate discriminative learning of latent
PCFGs, hoping to gain the best from both lines of work.
Discriminative methods for parsing are not new. However, most discriminative methods, at least
those which globally trade off feature weights, require repeated parsing of the training set, which is
generally impractical. Previous work on end-to-end discriminative parsing has therefore resorted to
"toy setups," considering only sentences of length 15 [6, 7, 8] or extremely small corpora [9]. To get
the benefits of discriminative methods, it has therefore become common practice to extract n-best
candidate lists from a generative parser and then use a discriminative component to rerank this list.
In such an approach, repeated parsing of the training set can be avoided because the discriminative
component only needs to select the best tree from a fixed candidate list. While most state-of-the-art
parsing systems apply this hybrid approach [10, 11, 12], it has the limitation that the candidate list
often does not contain the correct parse tree. For example 41% of the correct parses were not in the
candidate pool of ~30-best parses in [10].
In this paper we present a hierarchical pruning procedure that exploits the structure of the model
and allows feature expectations to be efficiently approximated, making discriminative training of
full-scale grammars practical. We present a gradient-based procedure for training a discriminative
grammar on the entire WSJ section of the Penn Treebank (roughly 40,000 sentences containing
1 million words). We then compare L1 and L2 regularization and show that L1 regularization is
superior, requiring fewer iterations to converge and yielding sparser solutions. Independent of the
regularization, discriminative grammars significantly outperform their generative counterparts in our
experiments.
[Figure 1 shows three parse trees for the fragment "Not this year.": the original treebank tree (ROOT over FRAG with RB, NP and a period), its right-branching binarization with intermediate FRAG nodes, and the binarized tree with a latent annotation -x on every category.]
Figure 1: (a) The original tree. (b) The (binarized) X-bar tree. (c) The annotated tree.
2 Grammars with latent annotations
Context-free grammars (CFGs) underlie most high-performance parsers in one way or another [13,
12, 14]. However, a CFG which simply takes the empirical productions and probabilities off of
a treebank does not perform well. This naive grammar is a poor one because its context-freedom
assumptions are too strong in some places and too weak in others. Therefore, a variety of techniques
have been developed to both enrich and generalize the naive grammar. Recently, an automatic state-splitting approach was shown to produce state-of-the-art performance [2, 14]. We extend this line of
work by investigating discriminative estimation techniques for automatically refined grammars.
We consider grammars that are automatically derived from a raw treebank. Our experiments are
based on a completely unsplit X-bar grammar, obtained directly from the Penn Treebank by the
binarization procedure shown in Figure 1. For each local tree rooted at an evaluation category X,
we introduce a cascade of new nodes labeled X so that each has two children in a right branching
fashion. Each node is then refined with a latent variable, splitting each observed category into k
unobserved subcategories. We refer to trees over unsplit categories as parse trees and trees over
split categories as derivations.
Our log-linear grammars are parametrized by a vector $\theta$ which is indexed by productions $X \rightarrow \gamma$. The conditional probability of a derivation tree $t$ given a sentence $w$ can be written as:

$$P_\theta(t \mid w) = \frac{1}{Z(\theta, w)} \prod_{X \rightarrow \gamma \in t} e^{\theta_{X \rightarrow \gamma}} = \frac{1}{Z(\theta, w)} \, e^{\theta^\top f(t)} \qquad (1)$$
where $Z(\theta, w)$ is the partition function and $f(t)$ is a vector indicating how many times each production occurs in the derivation $t$. The inside/outside algorithm [15] gives us an efficient way of summing over an exponential number of derivations. Given a sentence $w$ spanning the words $w_1, w_2, \ldots, w_n = w_{1:n}$, the inside and outside scores of a (split) category $A$ spanning $(i, j)$ are computed by summing over all possible children $B$ and $C$ spanning $(i, k)$ and $(k, j)$ respectively:1

$$S_{\mathrm{IN}}(A, i, j) = \sum_{A \rightarrow BC} \sum_{i < k < j} \phi_{A \rightarrow BC} \cdot S_{\mathrm{IN}}(B, i, k) \cdot S_{\mathrm{IN}}(C, k, j)$$
$$S_{\mathrm{OUT}}(A, i, j) = \sum_{B \rightarrow CA} \sum_{1 \le k < i} \phi_{B \rightarrow CA} \cdot S_{\mathrm{OUT}}(B, k, j) \cdot S_{\mathrm{IN}}(C, k, i) \; + \sum_{B \rightarrow AC} \sum_{j < k \le n} \phi_{B \rightarrow AC} \cdot S_{\mathrm{OUT}}(B, i, k) \cdot S_{\mathrm{IN}}(C, j, k), \qquad (2)$$
where we use $\phi_{A \rightarrow BC} = e^{\theta_{A \rightarrow BC}}$. In the generative case these scores correspond to the inside and outside probabilities $S_{\mathrm{IN}}(A, i, j) \stackrel{\mathrm{def}}{=} P_{\mathrm{IN}}(A, i, j) = P(w_{i:j} \mid A)$ and $S_{\mathrm{OUT}}(A, i, j) \stackrel{\mathrm{def}}{=} P_{\mathrm{OUT}}(A, i, j) = P(w_{1:i} \, A \, w_{j:n})$ [15]. The scores lack this probabilistic interpretation in the discriminative case, but they can nonetheless be normalized in the same way as probabilities to produce the expected counts of productions needed at training time. The posterior probability of a production $A \rightarrow BC$ spanning $(i, j)$ with split point $k$ in a sentence is easily expressed as:

$$\langle A \rightarrow BC, i, j, k \rangle \propto S_{\mathrm{OUT}}(A, i, j) \cdot \phi_{A \rightarrow BC} \cdot S_{\mathrm{IN}}(B, i, k) \cdot S_{\mathrm{IN}}(C, k, j) \qquad (3)$$
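To make the recursion of Eqn. 2 concrete, here is a minimal CKY-style sketch of the inside pass (an illustration, not the authors' implementation: the dense tensor encoding of `phi`, the uniform leaf initialization, and the omission of unary productions are all simplifying assumptions):

```python
import numpy as np

def inside_scores(phi, n, n_cats):
    """Inside pass of Eqn. 2 for a weighted CFG in a CKY chart.

    phi[a, b, c] holds the weight exp(theta_{A -> B C}) of the binary
    production A -> B C; S_in[a, i, j] accumulates the inside score of
    (split) category a over the span (i, j) of an n-word sentence.
    """
    S_in = np.zeros((n_cats, n + 1, n + 1))
    S_in[:, np.arange(n), np.arange(1, n + 1)] = 1.0  # placeholder leaf scores
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):  # split point between the two children
                S_in[:, i, j] += np.einsum(
                    'abc,b,c->a', phi, S_in[:, i, k], S_in[:, k, j])
    return S_in
```

The outside pass mirrors this loop in top-down order, and normalizing products of inside and outside scores as in Eqn. 3 yields the expected production counts used during training.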
To obtain a grammar from the training trees, we want to learn a set of grammar parameters $\theta$ on latent annotations despite the fact that the original trees lack the latent annotations. We will consider
1 Although we show only the binary component, of course both binary and unary productions are included.
generative grammars, where the parameters $\theta$ are set to maximize the joint likelihood of the training sentences and their parse trees, and discriminative grammars, where the parameters $\theta$ are set to
maximize the likelihood of the correct parse tree (vs. all possible trees) given a sentence. Previous
work on automatic grammar refinement has focused on different estimation techniques for learning
generative grammars with latent labels (training with basic EM [1], an EM-based split and merge
approach [2], a non-parametric variational approach [16]). In the following, we review how generative grammars are learned and present an algorithm for estimating discriminative grammars with
latent variables.
2.1 Generative Grammars
Generative grammars with latent variables can be seen as tree structured hidden Markov models. A
simple EM algorithm [1] allows us to learn parameters for generative grammars which maximize
the log joint likelihood of the training sentences $w$ and parse trees $T$:

$$\mathcal{L}_{\mathrm{joint}}(\theta) = \log \prod_i P_\theta(w_i, T_i) = \log \prod_i \sum_{t : T_i} P_\theta(w_i, t), \qquad (4)$$
where t are derivations (over split categories) corresponding to the observed parse tree (over unsplit
categories). In the E-Step we compute inside/outside scores over the set of derivations corresponding
to the observed gold tree by restricting the sums in Eqn. 2 to produce only such derivations. 2
We then use Eqn. 3 to compute expectations which are normalized in the M-Step to update the
production probabilities $\phi_{X \rightarrow \gamma} = e^{\theta_{X \rightarrow \gamma}}$ to their maximum likelihood estimates:

$$\phi_{X \rightarrow \gamma} = \frac{\sum_T E_\theta[f_{X \rightarrow \gamma}(t) \mid T]}{\sum_{\gamma'} \sum_T E_\theta[f_{X \rightarrow \gamma'}(t) \mid T]} \qquad (5)$$
Here, $E_\theta[f_{X \rightarrow \gamma}(t) \mid T]$ denotes the expected count of the production (or feature) $X \rightarrow \gamma$ with respect to $P_\theta$ in the set of derivations $t$ which are consistent with the observed parse tree $T$. Similarly, we will write $E_\theta[f_{X \rightarrow \gamma}(t) \mid w]$ for the expectation over all derivations of the sentence $w$. Our generative grammars with latent variables are probabilistic context-free grammars (CFGs), where $\sum_{\gamma'} \phi_{X \rightarrow \gamma'} = 1$ and $Z(\theta) = 1$. Note, however, that this normalization constraint poses no restriction on the model class, as probabilistic and weighted CFGs are equivalent [18].
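As a small illustration of the M-Step of Eqn. 5, the following sketch renormalizes expected production counts into probabilities (the dictionary representation of the grammar is an assumption made for brevity):

```python
from collections import defaultdict

def m_step(expected_counts):
    """Relative-frequency M-Step of Eqn. 5 (sketch).

    expected_counts maps a production (parent, rhs) to its expected count
    E[f(t)|T] summed over all training trees; the result phi gives, for
    each split parent category, a distribution over right-hand sides.
    """
    totals = defaultdict(float)
    for (parent, rhs), count in expected_counts.items():
        totals[parent] += count
    return {(parent, rhs): count / totals[parent]
            for (parent, rhs), count in expected_counts.items()}
```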
2.2 Discriminative Grammars
Discriminative grammars with latent variables can be seen as conditional random fields [4] over
trees. For discriminative grammars, we maximize the log conditional likelihood:
$$\mathcal{L}_{\mathrm{cond}}(\theta) = \log \prod_i P_\theta(T_i \mid w_i) = \log \prod_i \sum_{t : T_i} \frac{e^{\theta^\top f(t)}}{Z(\theta, w_i)} \qquad (6)$$
We directly optimize this non-convex objective function using a numerical gradient based method
(LBFGS [19] in our implementation).3 Fitting the log-linear model involves the following derivatives:
$$\frac{\partial \mathcal{L}_{\mathrm{cond}}(\theta)}{\partial \theta_{X \rightarrow \gamma}} = \sum_i \Big( E_\theta[f_{X \rightarrow \gamma}(t) \mid T_i] - E_\theta[f_{X \rightarrow \gamma}(t) \mid w_i] \Big), \qquad (7)$$
where the first term is the expected count of a production in derivations corresponding to the correct
parse tree and the second term is the expected count of the production in all parses.
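The gradient of Eqn. 7 has a simple form in code; the sketch below is illustrative, and `expected_counts` is a hypothetical helper standing in for the inside/outside computation of Eqn. 2 (restricted to the gold derivations when a tree is supplied):

```python
import numpy as np

def conditional_gradient(theta, corpus, expected_counts):
    """Gradient of the conditional log-likelihood, Eqn. 7 (sketch)."""
    grad = np.zeros_like(theta)
    for sentence, gold_tree in corpus:
        # expected counts over derivations of the observed parse tree
        grad += expected_counts(theta, sentence, gold_tree=gold_tree)
        # expected counts over all parses of the sentence
        grad -= expected_counts(theta, sentence, gold_tree=None)
    return grad
```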
The challenge in estimating discriminative grammars is that the computation of some quantities
requires repeatedly taking expectations over all parses of all sentences in the training set. We will
discuss ways to make their computation on large data sets practical in the next section.
2 Since the tree structure is observed this can be done in linear time [17].
3 Alternatively, maximum conditional likelihood estimation can also be seen as a special case of maximum likelihood estimation, where P(w) is assumed to be the empirical one and not learned. The conditional likelihood optimization can therefore be addressed by an EM algorithm which is similar to the generative case. However, while the E-Step remains the same, the M-Step involves fitting a log-linear model, which requires optimization, unlike the joint case, which can be done analytically using relative frequency estimators. This EM algorithm typically converges to a comparable local maximum as direct optimization of the objective function but requires 3-4 times more iterations.
3 Efficient Discriminative Estimation
Computing the partition function in Eqn. 6 requires parsing of the entire training corpus. Even with
recent advances in parsing efficiency and fast CPUs, parsing the entire corpus repeatedly remains
prohibitive. Fast parsers like [12, 14] can parse several sentences per second, but parsing the 40,000
training sentences still requires more than 5 hours on a fast machine. Even in a parallel implementation, parsing the training corpus several hundred times, as necessary for discriminative training,
would and, in fact, did in the case of maximum margin training [6], require weeks. Generally speaking, there are two ways of speeding up the training process: reducing the total number of training
iterations and reducing the time required per iteration.
3.1 Hierarchical Estimation
The number of training iterations can be reduced by training models of increasing complexity in a
hierarchical fashion. For example in mixture modeling [20] and machine translation [21], a sequence
of increasingly more complex models is constructed and each model is initialized with its (simpler)
predecessor. In our case, we begin with the unsplit X-Bar grammar and iteratively split each category
in two and re-train the grammar. In each iteration, we initialize with the results of the smaller
grammar, splitting each annotation category in two and adding a small amount of randomness to
break symmetry. In addition to reducing the number of training iterations, hierarchical training has
been shown to lead to better parameter estimates [2]. However, even with hierarchical training,
large-scale discriminative training will remain impractical, unless we can reduce the time required
to parse the training corpus.
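The split-and-initialize step at the heart of hierarchical training can be sketched as follows (a toy under a simplified grammar representation: we split only the parent symbol of each production, whereas the full procedure also splits the child symbols):

```python
import numpy as np

def split_weights(theta, rng, noise=1e-2):
    """Split each latent subcategory in two and seed the refined grammar
    with the coarse weights plus a little randomness to break symmetry."""
    refined = {}
    for (parent, rhs), w in theta.items():
        for sub in (0, 1):
            refined[((parent, sub), rhs)] = w + noise * rng.standard_normal()
    return refined

rng = np.random.default_rng(0)
coarse = {("NP", ("DT", "NN")): 0.3}  # hypothetical single-rule grammar
fine = split_weights(coarse, rng)     # two refined variants of the NP rule
```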
3.2 Feature-Count Approximation
High-performance parsers have employed coarse-to-fine pruning schemes, where the sentence is
rapidly pre-parsed with increasingly more complex grammars [22, 14]. Any constituent with sufficiently low posterior probability triggers the pruning of its refined variants in subsequent passes.
While this method has no theoretical guarantees, it has been empirically shown to lead to a 100-fold
speed-up without producing search errors [14].
Instead of parsing each sentence exhaustively with the most complex grammar in each iteration,
we can approximate the expected feature counts by parsing in a hierarchical coarse-to-fine scheme.
We start by parsing exhaustively with the X-Bar grammar and then prune constituents with low
posterior probability ($e^{-10}$ in our experiments).4 We then continue to parse with the next more
refined grammar, skipping over constituents whose less refined predecessor has been pruned. After
parsing with the most refined grammar, we extract expected counts from the final (sparse) chart.
The expected counts will be approximations because many small counts have been set to zero by the
pruning procedure.
Even though this procedure speeds-up each training iteration tremendously, training remains prohibitively slow. We can make repeated parsing of the same sentences significantly more efficient
by caching the pruning history from one training iteration to the next. Instead of computing each
stage in the coarse-to-fine scheme for every pass, we can compute it once when we start training a
grammar and update only the final, most refined scores in every iteration. Cached pruning has the
positive side effect of constraining subcategories to refine their predecessors, so that we do not need
to worry about issues like subcategory drift and projections [14].
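A compressed sketch of the cached pruning bookkeeping (the chart layout, sentence keys, and the helper computing the coarse passes are assumptions for illustration):

```python
import numpy as np

def prune_mask(posteriors, log_threshold=-10.0):
    """Keep a chart span iff some subcategory's posterior exceeds e^{-10}.

    posteriors: array of shape (n_subcategories, n_spans) produced by one
    grammar of the coarse-to-fine cascade.
    """
    return np.log(posteriors.max(axis=0) + 1e-300) > log_threshold

# Cached pruning: the masks from the coarser grammars are computed once per
# sentence and then reused in every training iteration, so that only the
# scores of the most refined grammar are recomputed when theta changes.
pruning_cache = {}  # sentence id -> list of span masks, one per coarse pass
```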
As only extremely unlikely items are removed from the chart, pruning has virtually no effect on
the conditional likelihood. Pruning more aggressively leads to a training procedure reminiscent of
contrastive estimation [23], where the denominator is restricted to a neighborhood of the correct
parse tree (rather than containing all possible parse trees). In our experiments, pruning more aggressively did not hurt performance for grammars with few subcategories, but limited the performance
of grammars with many subcategories.
4 Even a tighter threshold produced no search errors on a held out set in [14]. We enforce that the gold parse is always reachable.
[Figure 2(a) plots the average number of constituents constructed per sentence (0 to 20,000) against the number of latent subcategories (1, 2, 4, 8, 16) for three regimes: no pruning, coarse-to-fine pruning, and precomputed pruning.]
PARSING TIME        coarse-to-fine   cached pruning
1 subcategory           350 min          30 min
2 subcategories         390 min          40 min
4 subcategories         434 min          44 min
8 subcategories         481 min          47 min
16 subcategories        533 min          52 min
Figure 2: Average number of constructed constituents per sentence (a) and time to parse the training
corpus for different pruning regimes and grammar sizes (b).
4 Results
We ran our experiments on the Wall Street Journal (WSJ) portion of the English Penn Treebank
using the standard setup: we trained on sections 2 to 21. Section 22 was used as development set for
intermediate results. All of section 23 was reserved for the final test. We used the EVALB parseval
reference implementation for scoring. We will report F1-scores5 and exact match percentages. For
the final test, we selected the grammar that performed best on the development set.
For our lexicon, we used a simple approach where rare words (seen five times or less during training)
are replaced by one of 50 unknown word tokens based on a small number of word-form features.
To parse new sentences with a grammar, we compute the posterior distribution over productions at
each span and extract the tree with the maximum expected number of correct productions [14].
4.1 Efficiency
The average number of constituents that are constructed while parsing a sentence is a good indicator
for the efficiency of our cached pruning scheme.6 Figure 2(a) shows the average number of chart
items that are constructed per sentence. Coarse-to-fine pruning refers to hierarchical pruning without
caching [14] and while it is better than no-pruning, it still constructs a large number of constituents
for heavily refined grammars. In contrast, with cached pruning the number of constructed chart
items stays roughly constant (or even decreases) when the number of subcategories increases. The
reduced number of constructed constituents results in a 10-fold reduction of parsing time, see Figure
2(b), and makes discriminative training on a large scale corpus computationally feasible.
We found that roughly 100-150 training iterations were needed for LBFGS to converge after each
split. Distributing the training over several machines is straightforward as each sentence can be
parsed independently of all other sentences. Starting from an unsplit X-Bar grammar we were able
to hierarchically train a 16 substate grammar in three days using eight CPUs in parallel.7
It should be also noted that we can expedite training further by training in an interleaved mode, where
after splitting a grammar we first run generative training for some time (which is very fast) and then
use the resulting grammar to initialize the discriminative training. In such a training regime, we only
needed around 50 iterations of discriminative training until convergence, significantly speeding up
the training, while maintaining the same final performance.
4.2 Regularization
Regularization is often necessary to prevent discriminative models from overfitting on the training
set. Surprisingly enough, we found that no regularization was necessary when training on the entire training set, even in the presence of an abundance of features. During development we trained
on subsets of the training corpus and found that regularization was crucial for preventing overfitting.

5 The harmonic mean of precision P and recall R: $\frac{2PR}{P+R}$.
6 The other main factor determining the parsing time is the grammar size.
7 Memory limitations prevent us from learning grammars with more subcategories, a problem that could be alleviated by merging back the least useful splits as in [2].
                     EXACT MATCH                      F1-SCORE
                     generative  discriminative      generative  discriminative
1 subcategory            7.6          7.8               64.8         67.3
2 subcategories         14.6         20.1               76.4         80.8
4 subcategories         24.6         31.3               83.7         85.6
8 subcategories         31.4         37.0               86.6         87.8
16 subcategories        35.8         39.4               88.7         89.3
Table 1: Discriminative training is superior to generative training for exact match and for F1 -score.
                           L1 regularization                        L2 regularization
                    F1-score  Exact  # Feat.  # Iter.      F1-score  Exact   # Feat.   # Iter.
1 subcategory         67.3     7.8     23 K      44           67.4     7.9      35 K      67
2 subcategories       80.8    20.1     74 K     108           80.3    19.5     123 K     132
4 subcategories       85.6    31.3    147 K      99           85.7    31.5     547 K     148
8 subcategories       87.8    37.0    318 K      82           87.6    36.9   2,983 K     111
16 subcategories      89.3    39.4    698 K      75           89.1    38.7  11,489 K     102

Table 2: L1 regularization produces sparser solutions and requires fewer training iterations than L2 regularization.
This result is in accordance with [16] where a variational Bayesian approach was found to be beneficial for small training sets but performed on par with EM for large amounts of training data.
Regularization is achieved by adding a penalty term to the conditional log likelihood function $\mathcal{L}_{\mathrm{cond}}(\theta)$. This penalty term is often a weighted norm of the parameter vector and thereby penalizes large parameter values. We investigated L1 and L2 regularization:

$$\mathcal{L}'_{\mathrm{cond}}(\theta) = \mathcal{L}_{\mathrm{cond}}(\theta) - \sum_{X \rightarrow \gamma} \frac{|\theta_{X \rightarrow \gamma}|}{\sigma} \qquad\quad \mathcal{L}'_{\mathrm{cond}}(\theta) = \mathcal{L}_{\mathrm{cond}}(\theta) - \frac{1}{2} \sum_{X \rightarrow \gamma} \left( \frac{\theta_{X \rightarrow \gamma}}{\sigma} \right)^2 \qquad (8)$$

where the regularization parameter $\sigma$ is tuned on a held out set. In the L2 case, the penalty term is a convex and differentiable function of the parameters and hence can be easily integrated into our training procedure. In the L1 case, however, the penalty term is discontinuous whenever some parameter equals zero. To handle the discontinuity of the gradient, we used the orthant-wise limited-memory quasi-Newton algorithm of [24].
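In code, the two penalized objectives of Eqn. 8 differ by one line; `cond_ll` below is a hypothetical callable returning the unpenalized conditional log-likelihood:

```python
import numpy as np

def penalized_objective(theta, cond_ll, sigma, penalty="l1"):
    """Regularized conditional log-likelihood of Eqn. 8 (sketch).

    The L1 penalty is non-differentiable wherever a parameter is exactly
    zero, which is why the paper uses an orthant-wise quasi-Newton
    method [24] instead of plain LBFGS in that case.
    """
    if penalty == "l1":
        return cond_ll(theta) - np.sum(np.abs(theta)) / sigma
    return cond_ll(theta) - 0.5 * np.sum((theta / sigma) ** 2)
```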
Table 2 shows that while there is no significant performance difference in models trained with L1 or L2 regularization, there is a significant difference in the number of training iterations and the sparsity
of the parameter vector. L1 regularization leads to extremely sparse parameter vectors (96% of the
parameters are zero in the 16 subcategory case), while no parameter value becomes exactly zero with
L2 regularization. It remains to be seen how this sparsity can be exploited, as these zeros become
ones when exponentiated in order to be used in the computation of inside and outside scores.
4.3 Final Test Set Results
Table 1 shows a comparison of generative and discriminative grammars for different numbers of
subcategories. Discriminative training is superior to generative training for exact match as well as for
F1 -score for all numbers of subcategories. For our largest grammars, we see absolute improvements
of 3.63% and 0.61% in exact match and F1 score respectively. The better performance is due to
better parameter estimates, as the model classes defined by the generative and discriminative model
(probabilistic vs. weighted CFGs) are equivalent [18] and the same feature sets were used in all
experiments.
Our final test set parsing F1 -score of 88.8/88.3 (40 word sentences/all sentences) is better than most
other systems, including basic generative latent variable grammars [1] (F1 -score of 86.7/86.1) and
even fully lexicalized systems [13] (F1 -score of 88.6/88.2), but falls short of the very best systems
[12, 14], which achieve accuracies above 90%. However, many of the techniques used in [12, 14]
are orthogonal to what was presented here (additional non-local/overlapping features, merging of
unnecessary splits) and could be incorporated into the discriminative model.
[Figure 3(a) plots the loss in F1 score (from 0.2 down to -1.6) against the merging percentage (0 to 1) for generative and discriminative grammars.]
80% MERGING   common                         only in generative   only in discriminative
Grammar       S, S, SBAR, NP, NP, VP, VP     -                    ADJP, SINV
Lexicon       DT, CC, IN, VBD, VB, VBZ,      NNP, NNS, CD, WP$    VBG
              NN, RB, JJ
Figure 3: (a) Loss in F1 score for different amounts of merging. (b) Categories with two subcategories after merging 80% of the subcategories according to the merging criterion in [2].
4.4 Analysis
Generatively trained grammars with latent variables have been shown to exhibit many linguistically
interpretable phenomena [2]. Space does not permit a thorough exposition, and post hoc analysis of
learned structures is prone to seeing what one expects, but nonetheless it can be helpful to illustrate
the broad patterns that are learned. Not surprisingly, many comparable trends can be observed in
generatively and discriminatively trained grammars. For example, the same subdivisions of the
determiner category (DT) into definite (the), indefinite (a), demonstrative (this) and quantificational
(some) elements emerge under both training regimes. Another example is the preposition category
(IN) where subcategories for subordinating conjunctions like (that) and different types of proper
prepositions are learned. Typically the divisions in the discriminative grammars are much more
pronounced, putting the majority of the weight on a few dominant words.
While many similarities can be found, it is especially interesting to examine how generative and
discriminative grammars differ. The nominal categories in generative grammars exhibit many clusters of semantic nature (e.g. subcategories for dates, monetary units, capitalized words, etc.). For
example, the following two subcategories of the proper noun (NNP) category {New, San, Wall} and
{York, Francisco, Street} (here represented by the three most likely words) are learned by the generative grammars. These subcategories are very useful for modeling correlations when generating
words and many clusters with such semantic patterns appear in the generative grammars. However, these clusters do not interact strongly with disambiguation and are therefore not learned by
the discriminative grammars. Similar observations hold for plural proper nouns (NNPS), superlative
adjectives (JJS), and cardinal numbers (CD), which are heavily split into semantic subcategories in
the generative grammars but are split very little or not at all in the discriminative grammars.
Examining the phrasal splits is much more intricate. We therefore give just one example from
grammars with two subcategories, which illustrates the main difference between generative and discriminative grammars. Simple declarative clauses (S) are the most common sentences in the Penn
Treebank, and in the generative case the most likely expansion of the ROOT category is ROOT→S1,
being chosen 91% of the time. In the discriminative case this production is only the third likeliest
with a weight of 13.2. The highest weighted expansion of the ROOT in the discriminative grammar
is ROOT→SBARQ1, with a weight of 46.5, a production that has a probability of 0.3% in the generative grammar. While generative grammars model the empirical distributions of productions in the
training set, discriminative grammars maximize the discriminative power of the model. This can for
example result in putting the majority of the weight on underrepresented productions.
We applied the merging criterion suggested in [2] to two grammars with two subcategories in order
to quantitatively examine how many subcategories are learned. This criterion approximates the loss
in joint likelihood incurred from merging two subcategories and we extended it to approximate the
loss in conditional likelihood from merging two subcategories at a given node. Figure 3(a) shows
the loss in F1 -score when the least useful fraction of the subcategories are merged. Our observation
that the discriminative grammars learn far fewer clusters are confirmed, as one can merge back 80%
of the subcategories at almost no loss in F1 (while one can merge only 50% in the generative case).
This suggests that one can learn discriminative grammars which are significantly more compact and
accurate than their generative counterparts. Figure 3(b) shows which categories remain split when
80% of the splits are merged. While there is a substantial overlap between the learned splits, one
can see that joint likelihood can be better maximized by refining the lexicon, while conditional
likelihood is better maximized by refining the grammar.
5 Conclusions and Future Work
We have presented a hierarchical pruning procedure that allows efficient discriminative training of
log-linear grammars with latent variables. We avoid repeated computation of similar quantities by
caching information between training iterations and approximating feature expectations. We presented a direct gradient-based procedure for optimizing the conditional likelihood function which
in our experiments on full-scale treebank parsing led to discriminative latent models which outperform both the comparable generative latent models and the discriminative non-latent baselines. We furthermore investigated different regularization penalties and showed that L1 regularization leads to extremely sparse solutions.
While our results are encouraging, this is merely a first investigation into large-scale discriminative
training of latent variable grammars and opens the door for many future experiments: discriminative grammars allow the seamless integration of non-local and overlapping features and it will be
interesting to see how proven features from reranking systems [10, 11, 12] and other orthogonal
improvements like merging and smoothing [2] will perform in an end-to-end discriminative system.
References
[1] T. Matsuzaki, Y. Miyao, and J. Tsujii. Probabilistic CFG with latent annotations. In ACL ?05, 2005.
[2] S. Petrov, L. Barrett, R. Thibaux, and D. Klein. Learning accurate, compact, and interpretable tree annotation. In ACL ?06, 2006.
[3] A. Y. Ng and M. I. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression
and naive Bayes. In NIPS ?02, 2002.
[4] J. Lafferty, A. McCallum, and F. Pereira. Conditional Random Fields: Probabilistic models for segmenting and labeling sequence data. In ICML ?01, 2001.
[5] D. Klein and C. Manning. Conditional structure vs conditional estimation in NLP models. In EMNLP
?02, 2002.
[6] B. Taskar, D. Klein, M. Collins, D. Koller, and C. Manning. Max-margin parsing. In EMNLP ?04, 2004.
[7] J. Henderson. Discriminative training of a neural network statistical parser. In ACL ?04, 2004.
[8] J. Turian, B. Wellington, and I. D. Melamed. Scalable discriminative learning for natural language parsing
and translation. In NIPS ?07, 2007.
[9] M. Johnson. Joint and conditional estimation of tagging and parsing models. In ACL ?01, 2001.
[10] M. Collins. Discriminative reranking for natural language parsing. In ICML ?00, 2000.
[11] T. Koo and M. Collins. Hidden-variable models for discriminative reranking. In EMNLP ?05, 2005.
[12] E. Charniak and M. Johnson. Coarse-to-Fine N-Best Parsing and MaxEnt Discriminative Reranking. In
ACL?05, 2005.
[13] M. Collins. Head-Driven Statistical Models for Natural Language Parsing. PhD thesis, UPenn., 1999.
[14] S. Petrov and D. Klein. Improved inference for unlexicalized parsing. In HLT-NAACL ?07, 2007.
[15] K. Lari and S. Young. The estimation of stochastic context-free grammars using the inside-outside algorithm. Computer Speech and Language, 1990.
[16] P. Liang, S. Petrov, M. I. Jordan, and D. Klein. The infinite PCFG using hierarchical Dirichlet processes.
In EMNLP ?07, 2007.
[17] F. Pereira and Y. Schabes. Inside-outside reestimation from partially bracketed corpora. In ACL, 1992.
[18] N. A. Smith and M. Johnson. Weighted and probabilistic context-free grammars are equally expressive.
To appear in Computational Linguistics, 2007.
[19] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 1999.
[20] N. Ueda, R. Nakano, Z. Ghahramani, and G. E. Hinton. Split and merge EM algorithm for mixture
models. Neural Computation, 12(9):2109?2128, 2000.
[21] P. F. Brown, S. A. D. Pietra, V. J. D. Pietra, and R. L. Mercer. The mathematics of statistical machine
translation. Computational Linguistics, 19(2), 1993.
[22] E. Charniak, M. Johnson, D. McClosky, et al. Multi-level coarse-to-fine PCFG Parsing. In HLT-NAACL
?06, 2006.
[23] N. A. Smith and J. Eisner. Contrastive estimation: Training log-linear models on unlabeled data. In ACL
?05, 2005.
[24] G. Andrew and J. Gao. Scalable training of L1-regularized log-linear models. In ICML ?07, 2007.
2,578 | 3,338 | Hierarchical Penalization
Marie Szafranski 1 , Yves Grandvalet 1, 2 and Pierre Morizet-Mahoudeaux 1
Heudiasyc 1 , UMR CNRS 6599
Université de Technologie de Compiègne
BP 20529, 60205 Compiègne Cedex, France
IDIAP Research Institute 2
Av. des Prés-Beudin 20
P.O. Box 592, 1920 Martigny, Switzerland
marie.szafranski@hds.utc.fr
Abstract
Hierarchical penalization is a generic framework for incorporating prior information in the fitting of statistical models, when the explicative variables are organized
in a hierarchical structure. The penalizer is a convex functional that performs soft
selection at the group level, and shrinks variables within each group. This favors
solutions with few leading terms in the final combination. The framework, originally derived for taking prior knowledge into account, is shown to be useful in
linear regression, when several parameters are used to model the influence of one
feature, or in kernel regression, for learning multiple kernels.
Keywords – Optimization: constrained and convex optimization. Supervised
learning: regression, kernel methods, sparsity and feature selection.
1 Introduction
In regression, we want to explain or to predict a response variable y from a set of explanatory
variables $x = (x_1, \ldots, x_j, \ldots, x_d)$, where $y \in \mathbb{R}$ and $\forall j$, $x_j \in \mathbb{R}$. For this purpose, we use a model such that $y = f(x) + \varepsilon$, where $f$ is a function able to characterize $y$ when $x$ is observed and $\varepsilon$ is a residual error.
Supervised learning consists in estimating f from the available training dataset S = {(xi , yi )}ni=1 .
It can be achieved in a predictive or a descriptive perspective: to predict accurate responses for future
observations, or to show the correlations that exist between the set of explanatory variables and the
response variable, and thus, give an interpretation to the model.
In the linear case, the function $f$ consists of an estimate $\beta = (\beta_1, \ldots, \beta_j, \ldots, \beta_d)^t$ applied to $x$, that is to say $f(x) = x\beta$. In a predictive perspective, $x\beta$ produces an estimate of $y$, for any observation $x$. In a descriptive perspective, $|\beta_j|$ can be interpreted as a degree of relevance of variable $x_j$.
Ordinary Least Squares (OLS) minimizes the sum of the residual squared error. When the explanatory variables are numerous and many of them are correlated, the variability of the OLS estimate
tends to increase. This leads to reduced prediction accuracy, and an interpretation of the model
becomes tricky.
Coefficient shrinkage is a major approach of regularization procedures in linear regression models. It overcomes the drawbacks described above by adding a constraint on the norm of the estimate $\beta$. According to the chosen norm, coefficients associated to variables with little predictive information may be shrunk, or even removed when variables are irrelevant. This latter case is referred to as variable selection. In particular, ridge regression shrinks coefficients with regard to the $\ell_2$-norm, while the lasso (Least Absolute Shrinkage and Selection Operator) [1] and the lars (Least Angle Regression Stepwise) [2] both shrink and remove coefficients using the $\ell_1$-norm.
[Figure 1 shows, on the left, six variables x1-x6 gathered into three groups J1 = {x1, x2}, J2 = {x3, x4} and J3 = {x5, x6}, and, on the right, the equivalent tree of height two in which the branch from the root to node k carries the scale factor sigma_{1,k} and the branch from node k to leaf j carries sigma_{2,j}.]
Figure 1: left: toy-example of the original structure of variables; right: equivalent tree structure considered for the formalization of the scaling problem.
In some applications, explanatory variables that share a similar characteristic can be gathered into
groups – or factors. Sometimes, they can be organized hierarchically. For instance, in genomics,
where explanatory variables are (products of) genes, some factors can be identified from the prior
information available in the hierarchies of Gene Ontology. Then, it becomes necessary to find
methods that retain meaningful factors instead of individual variables.
Group-lasso and group-lars [3] can be considered as hierarchical penalization methods, with trees of
height two defining the hierarchies. They perform variable selection by encouraging sparseness over
predefined factors. These techniques seem perfectible in the sense that hierarchies can be extended
to more than two levels and sparseness integrated within groups. This paper proposes a penalizer,
derived from an adaptive penalization formulation [4], that highlights factors of interest by balancing
constraints on each element, at each level of a hierarchy. It performs soft selection at the factor level,
and shrinks variables within groups, to favor solutions with few leading terms.
Section 2 introduces the framework of hierarchical penalization and the associated algorithm is
presented in Section 3. Section 4 shows how this framework can be applied to linear and kernel
regression. We conclude with a general survey of our future works.
2 Hierarchical Penalization

2.1 Formalization
We introduce hierarchical penalization by considering problems where the variables are organized
in a tree structure of height two, such as the example displayed in figure 1. The nodes of height
one are labelled in $\{1, \ldots, K\}$. The set of children (that is, leaves) of node $k$ is denoted $J_k$ and its cardinality is $d_k$. As displayed on the right-hand-side of figure 1, a branch stemming from the root and going to node $k$ is labelled by $\sigma_{1,k}$, and the branch reaching leaf $j$ is labelled by $\sigma_{2,j}$.
We consider the problem of minimizing a differentiable loss function $L(\beta)$, subject to sparseness constraints on $\beta$ and the subsets of $\beta$ defined in a tree hierarchy. This reads

$$\min_{\beta, \sigma} \; L(\beta) + \lambda \sum_{k=1}^{K} \sum_{j \in J_k} \frac{\beta_j^2}{\sqrt{\sigma_{1,k} \, \sigma_{2,j}}} \qquad \text{(1a)}$$

$$\text{subject to} \quad \sum_{k=1}^{K} d_k \, \sigma_{1,k} = 1 \, , \quad \sum_{j=1}^{d} \sigma_{2,j} = 1 \, , \qquad \text{(1b)}$$

$$\sigma_{1,k} \geq 0 \quad k = 1, \ldots, K \, , \quad \sigma_{2,j} \geq 0 \quad j = 1, \ldots, d \, , \qquad \text{(1c)}$$

where $\lambda > 0$ is a Lagrangian parameter that controls the amount of shrinkage, and $x/y$ is defined by continuation at zero as $x/0 = \infty$ if $x \neq 0$ and $0/0 = 0$.
The second term of expression (1a) penalizes $\beta$, according to the tree structure, via scaling factors $\sigma_1$ and $\sigma_2$. The constraints (1b) shrink the coefficients $\beta$ at group level and inside groups. In what
follows, we show that problem (1) is convex and that this joint shrinkage encourages sparsity at the
group level.
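As an illustration of how the scale factors enter the criterion, this toy sketch evaluates the penalty of (1a) for given beta and sigma (the group encoding is an assumption; the simplex constraints (1b) are taken as already satisfied):

```python
import numpy as np

def hierarchical_penalty(beta, sigma1, sigma2, groups, lam=1.0):
    """Penalty of (1a): lam * sum_k sum_{j in J_k} beta_j^2 / sqrt(s1_k s2_j),
    with the conventions x/0 = inf for x != 0 and 0/0 = 0."""
    total = 0.0
    for k, Jk in enumerate(groups):
        for j in Jk:
            denom = np.sqrt(sigma1[k] * sigma2[j])
            if denom == 0.0:
                total += np.inf if beta[j] != 0.0 else 0.0
            else:
                total += beta[j] ** 2 / denom
    return lam * total
```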
2.2 Two important properties
We first prove that the optimization problem (1) is tractable and moreover convex. Then, we show
an equivalence with another optimization problem, which exhibits the exact nature of the constraints
applied to the coefficients $\beta$.
Proposition 1 Provided $L(\beta)$ is convex, problem (1) is convex.
Proof: A problem minimizing a convex criterion on a convex set is convex. Since $L(\beta)$ is convex and $\lambda$ is positive, the criterion (1a) is convex provided $f(x, y, z) = \frac{x^2}{\sqrt{yz}}$ is convex. To show this, we compute the Hessian:

$$4 (yz)^{\frac{1}{2}} \, \nabla^2 f(x, y, z) = \begin{pmatrix} 8 & -4\frac{x}{y} & -4\frac{x}{z} \\ -4\frac{x}{y} & 3\frac{x^2}{y^2} & \frac{x^2}{yz} \\ -4\frac{x}{z} & \frac{x^2}{yz} & 3\frac{x^2}{z^2} \end{pmatrix} = 2 \begin{pmatrix} 2 \\ -\frac{x}{y} \\ -\frac{x}{z} \end{pmatrix} \begin{pmatrix} 2 \\ -\frac{x}{y} \\ -\frac{x}{z} \end{pmatrix}^{\!t} + \begin{pmatrix} 0 \\ \frac{x}{y} \\ -\frac{x}{z} \end{pmatrix} \begin{pmatrix} 0 \\ \frac{x}{y} \\ -\frac{x}{z} \end{pmatrix}^{\!t} .$$
Hence, the Hessian is positive semi-definite, and criterion (1a) is convex.
Next, constraints (1c) define half-spaces for $\sigma_1$ and $\sigma_2$, which are convex sets. Equality constraints (1b) define linear subspaces of dimension $K-1$ and $d-1$ which are also convex sets. The intersection of convex sets being a convex set, the constraints define a convex admissible set, and problem (1) is convex.
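A quick numerical sanity check of the rank-two decomposition used in the proof (illustrative only):

```python
import numpy as np

def scaled_hessian(x, y, z):
    """Return 4 (yz)^{1/2} times the Hessian of f(x,y,z) = x^2 / sqrt(yz),
    built from the two outer products of the proof, and check it is PSD."""
    u = np.array([2.0, -x / y, -x / z])
    v = np.array([0.0, x / y, -x / z])
    H = 2.0 * np.outer(u, u) + np.outer(v, v)
    assert np.all(np.linalg.eigvalsh(H) >= -1e-12)  # positive semi-definite
    return H

H = scaled_hessian(1.0, 2.0, 3.0)  # e.g. H[0, 0] == 8, as in the matrix above
```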
Proposition 2 Problem (1) is equivalent to

$$\min_{\beta} \; L(\beta) + \lambda \left( \sum_{k=1}^{K} d_k^{\frac{1}{4}} \Big( \sum_{j \in J_k} |\beta_j|^{\frac{4}{3}} \Big)^{\frac{3}{4}} \right)^{2} . \qquad (2)$$
Sketch of proof: The Lagrangian of problem (1) is

$$\mathcal{L} = L(\beta) + \lambda \sum_{k=1}^{K} \sum_{j \in J_k} \frac{\beta_j^2}{\sqrt{\sigma_{1,k} \, \sigma_{2,j}}} + \lambda_1 \Big( \sum_{k=1}^{K} d_k \, \sigma_{1,k} - 1 \Big) + \lambda_2 \Big( \sum_{j=1}^{d} \sigma_{2,j} - 1 \Big) - \sum_{k=1}^{K} \nu_{1,k} \, \sigma_{1,k} - \sum_{j=1}^{d} \nu_{2,j} \, \sigma_{2,j} .$$
Hence, the optimality conditions for $\sigma_{1,k}$ and $\sigma_{2,j}$ are

$$\frac{\partial \mathcal{L}}{\partial \sigma_{1,k}} = -\frac{\lambda}{2} \sum_{j \in J_k} \frac{\beta_j^2}{\sigma_{1,k}^{3/2} \, \sigma_{2,j}^{1/2}} + \lambda_1 d_k - \nu_{1,k} = 0 \, , \qquad \frac{\partial \mathcal{L}}{\partial \sigma_{2,j}} = -\frac{\lambda}{2} \, \frac{\beta_j^2}{\sigma_{1,k}^{1/2} \, \sigma_{2,j}^{3/2}} + \lambda_2 - \nu_{2,j} = 0 \, .$$
After some tedious algebra, the optimality conditions for $\sigma_{1,k}$ and $\sigma_{2,j}$ can be expressed as

$$\sigma_{1,k} = \frac{d_k^{-\frac{3}{4}} (s_k)^{\frac{3}{4}}}{\sum_{\kappa=1}^{K} d_\kappa^{\frac{1}{4}} (s_\kappa)^{\frac{3}{4}}} \qquad \text{and} \qquad \sigma_{2,j} = \frac{d_k^{\frac{1}{4}} \, |\beta_j|^{\frac{4}{3}}}{(s_k)^{\frac{1}{4}} \sum_{\kappa=1}^{K} d_\kappa^{\frac{1}{4}} (s_\kappa)^{\frac{3}{4}}} \quad \text{for } j \in J_k \, ,$$

where $s_k = \sum_{j \in J_k} |\beta_j|^{\frac{4}{3}}$. Plugging these conditions in criterion (1a) yields the claimed result.
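The closed forms above are easy to check numerically; this sketch computes the optimal scale factors and can be used to verify that plugging them into (1a) recovers the squared mixed norm of (2) (array-based group indexing is an assumption):

```python
import numpy as np

def optimal_scales(beta, groups):
    """Closed-form sigma_1 and sigma_2 from the proof of Proposition 2."""
    d = np.array([len(Jk) for Jk in groups], dtype=float)
    s = np.array([np.sum(np.abs(beta[Jk]) ** (4.0 / 3.0)) for Jk in groups])
    norm = np.sum(d ** 0.25 * s ** 0.75)
    sigma1 = d ** -0.75 * s ** 0.75 / norm
    sigma2 = np.empty_like(beta)
    for k, Jk in enumerate(groups):
        sigma2[Jk] = (d[k] ** 0.25 * np.abs(beta[Jk]) ** (4.0 / 3.0)
                      / (s[k] ** 0.25 * norm))
    return sigma1, sigma2

beta = np.array([0.5, -1.0, 0.3, 2.0])
sigma1, sigma2 = optimal_scales(beta, [np.array([0, 1]), np.array([2, 3])])
```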
2.3 Sparseness
Proposition 2 shows how the penalization influences the groups of variables and each variable in
each group. Note that, thanks to the positivity of the squared term in (2), the expression can be
further simplified to

$$\min_{\beta} \; L(\beta) + \lambda \sum_{k=1}^{K} d_k^{\frac{1}{4}} \Big( \sum_{j \in J_k} |\beta_j|^{\frac{4}{3}} \Big)^{\frac{3}{4}} , \qquad (3)$$
where, for any $L(\beta)$, there is a one-to-one mapping from $\lambda$ in (2) to $\lambda$ in (3). This expression can be interpreted as the Lagrangian formulation of a constrained optimization problem, where the admissible set for $\beta$ is defined by the multiplicand of $\lambda$.
We display the shape of the admissible set in figure 2, and compare it to ridge regression, which does
not favor sparsity, lasso, which encourages sparsity for all variables but does not take into account
the group structure, and group-lasso, which is invariant to rotations of within-group variables. One
sees that hierarchical penalization combines some features of lasso and group-lasso.
[Figure 2 shows the unit balls in $(\beta_1, \beta_2, \beta_3)$ of four penalties, with the first group spanning the horizontal $(\beta_1, \beta_2)$ plane and the second group on the vertical $\beta_3$ axis: ridge regression $\beta_1^2+\beta_2^2+\beta_3^2 \le 1$; lasso $|\beta_1|+|\beta_2|+|\beta_3| \le 1$; group-lasso $\sqrt{2}\,(\beta_1^2+\beta_2^2)^{1/2}+|\beta_3| \le 1$; hierarchical penalization $2^{1/4}(|\beta_1|^{4/3}+|\beta_2|^{4/3})^{3/4}+|\beta_3| \le 1$.]
Figure 2: Admissible sets for various penalties, the two horizontal axes are the $(\beta_1, \beta_2)$ plane (first group) and the vertical axis is for $\beta_3$ (second group).
By looking at the curvature of these sets when they meet axes, one gets a good intuition on why
ridge regression does not suppress variables, why lasso does, why group-lasso suppresses groups
of variables but not within-group variables, and why hierarchical penalization should do both. This
intuition is however not correct for hierarchical penalization because the boundary of the admissible set is differentiable in the within-group hyper-plane $(\beta_1, \beta_2)$ at $\beta_1 = 0$ and $\beta_2 = 0$. However,
as its curvature is very high, solutions with few leading terms in the within-group variables are
encouraged.
To go beyond the hints provided by these figures, we detail here the optimality conditions for $\beta$ minimizing (3). The first-order optimality conditions are

1. for $\beta_j = 0$, $j \in J_k$ and $\sum_{j \in J_k} |\beta_j| = 0$: $\frac{\partial L(\beta)}{\partial \beta_j} + \lambda \, d_k^{\frac{1}{4}} v_j = 0$, where $v_j \in [-1, 1]$;
2. for $\beta_j = 0$, $j \in J_k$ and $\sum_{j \in J_k} |\beta_j| \neq 0$: $\frac{\partial L(\beta)}{\partial \beta_j} = 0$;
3. for $\beta_j \neq 0$, $j \in J_k$: $\frac{\partial L(\beta)}{\partial \beta_j} + \lambda \, d_k^{\frac{1}{4}} \operatorname{sign}(\beta_j) \Big( 1 + \frac{1}{|\beta_j|^{4/3}} \sum_{\ell \in J_k, \, \ell \neq j} |\beta_\ell|^{4/3} \Big)^{-\frac{1}{4}} = 0 \, .$
These equations signify respectively that
1. the variables belonging to groups that are estimated to be irrelevant are penalized with the
highest strength, thus limiting the number of groups influencing the solution;
2. when a group has some non-zero relevance, all variables enter the set of active variables
provided they influence the fitting criterion;
3. however, the penalization strength increases very rapidly (as a smooth step function) for
small values of $|\beta_j|$, thus limiting the number of $\beta_j$ with large magnitude.
Overall, hierarchical penalization is thus expected to provide solutions with few active groups and
few leading variables within each group.
3 Algorithm
To solve problem (3), we use an active set algorithm, based on the approach proposed by Osborne
et al. [5] for the lasso. This algorithm iterates two phases: first, the optimization problem is solved
with a sub-optimal set of active variables, that is, non-zero variables: we define $\mathcal{A} = \{j \mid \beta_j \neq 0\}$, the current active set of variables, $\bar{\beta} = \{\beta_j\}_{j \in \mathcal{A}}$, the vector of coefficients associated to $\mathcal{A}$, and $G_k = J_k \cap \mathcal{A}$, the subset of coefficients $\bar{\beta}$ associated to group $k$. Then, at each iteration, we solve the problem

$$\min_{\bar{\beta}} \; \bar{L}(\bar{\beta}) = L(\bar{\beta}) + \lambda \sum_{k=1}^{K} d_k^{\frac{1}{4}} \Big( \sum_{j \in G_k} |\bar{\beta}_j|^{\frac{4}{3}} \Big)^{\frac{3}{4}} , \qquad (4)$$
by alternating steps A and B described below. Second, the set of active variables is incrementally
updated as detailed in steps C and D.
A. Compute a candidate update from an admissible vector $\bar{\beta}$
The goal is to solve $\min_h \bar{L}(\bar{\beta} + h)$, where $\bar{\beta}$ is the current estimate of the solution and $h \in \mathbb{R}^{|\mathcal{A}|}$. The difficulties in solving (4) stem from the discontinuities of the derivative due to the absolute values. These difficulties are circumvented by replacing $|\beta_j + h_j|$ by $\operatorname{sign}(\beta_j)(\beta_j + h_j)$. This enables the use of powerful continuous optimizers based either on the Newton, quasi-Newton or conjugate gradient methods according to the size of the problem.

B. Obtain a new admissible vector $\bar{\beta}$
Let $\beta^\star = \bar{\beta} + h$. If for all $j$, $\operatorname{sign}(\beta_j^\star) = \operatorname{sign}(\beta_j)$, then $\bar{\beta}$ is sign-feasible, and we go to step C, otherwise:
B.1 Let $S$ be the set of indices $m$ such that $\operatorname{sign}(\beta_m^\star) \neq \operatorname{sign}(\beta_m)$. Let $\gamma = \min_{m \in S} -\frac{\beta_m}{h_m}$, that is, $\gamma$ is the largest step in direction $h$ such that $\operatorname{sign}(\beta_m + \gamma h_m) = \operatorname{sign}(\beta_m)$, except for one variable, $\ell = \arg\min_m -\frac{\beta_m}{h_m}$, for which $\beta_\ell + \gamma h_\ell = 0$.
B.2 Set $\bar{\beta} = \bar{\beta} + \gamma h$ and $\operatorname{sign}(\beta_\ell) = -\operatorname{sign}(\beta_\ell)$, and compute a new direction $h$ as in step A. If, for the new solution $\beta^\star$, $\operatorname{sign}(\beta_\ell^\star) \neq \operatorname{sign}(\beta_\ell)$, then $\ell$ is removed from $\mathcal{A}$. Go to step A.
B.3 Iterate step B until $\bar{\beta}$ is sign-feasible.

C. Test optimality of $\bar{\beta}$
If the appropriate optimality condition holds for all inactive variables $\beta_\ell$ ($\beta_\ell = 0$), that is
C.1 for $\ell \in J_k$ where $\sum_{j \in J_k} |\beta_j| = 0$: $\big| \frac{\partial L(\beta)}{\partial \beta_\ell} \big| \le \lambda \, d_k^{\frac{1}{4}}$,
C.2 for $\ell \in J_k$ where $\sum_{j \in J_k} |\beta_j| \neq 0$: $\frac{\partial L(\beta)}{\partial \beta_\ell} = 0$,
then $\bar{\beta}$ is a solution. Else, go to step D.

D. Select the variable that enters the active set
D.1 Select variable $\ell$, $\ell \notin \mathcal{A}$, that maximizes $d_k^{-\frac{1}{4}} \big| \frac{\partial L(\beta)}{\partial \beta_\ell} \big|$, where $k$ is the group of variable $\ell$.
D.2 Update the active set: $\mathcal{A} \leftarrow \mathcal{A} \cup \{\ell\}$, with initial vector $\bar{\beta} = [\bar{\beta}, 0]^t$, where the sign of the new zero component is $-\operatorname{sign}\big(\frac{\partial L(\beta)}{\partial \beta_\ell}\big)$.
D.3 Go to step A.
The algorithm is initialized with $\mathcal{A} = \emptyset$, and the first variable is selected with the process described at step D.
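A compressed sketch of the outer loop (control flow only; `grad_L` and `solve_subproblem`, which runs steps A and B on the current active set, are hypothetical helpers):

```python
import numpy as np

def active_set_loop(grad_L, solve_subproblem, d, group_of, n_vars, lam):
    """Active set algorithm of Section 3, outer loop (sketch).

    d[k] is the group size and group_of[j] the group of variable j. The
    within-group condition C.2 is assumed to be enforced by the subproblem
    solver, so only conditions C.1 and D.1 appear explicitly here.
    """
    active, beta = [], np.zeros(n_vars)
    while True:
        if active:
            beta = solve_subproblem(beta, active)   # steps A and B
        g = grad_L(beta)
        # Step D.1: score inactive variables by d_k^{-1/4} |dL/dbeta_j|;
        # step C.1 declares optimality when the best score is at most lam.
        scores = {j: abs(g[j]) * d[group_of[j]] ** -0.25
                  for j in range(n_vars) if j not in active}
        if not scores:
            return beta
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= lam:
            return beta                             # step C: optimal
        active.append(j_best)                       # step D.2
```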
4 Experiments
We illustrate on two datasets how hierarchical penalization can be useful in exploratory analysis
and in prediction. Then, we show how the algorithm can be applied for multiple kernel learning in
kernel regression.
4.1 Abalone Database
The Abalone problem [6] consists in predicting the age of abalone from physical measurements. The dataset is composed of 8 attributes. One concerns the sex of the abalone, and has been encoded with dummy variables, that is, x_i^sex = (1,0,0) for male, x_i^sex = (0,1,0) for female, or x_i^sex = (0,0,1) for infant. This variable defines the first group. The second group is composed of 3 attributes concerning size parameters (length, diameter and height), and the last group is composed of weight parameters (whole, shucked, viscera and shell weight).
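As a concrete illustration, a minimal sketch of this grouping (column layout and names are our assumption, not taken from the paper):

    import numpy as np

    sex_codes = {"M": [1, 0, 0], "F": [0, 1, 0], "I": [0, 0, 1]}

    def encode_row(sex, size_feats, weight_feats):
        # 3 sex dummies + 3 size attributes + 4 weight attributes
        return np.array(sex_codes[sex] + list(size_feats) + list(weight_feats))

    groups = [list(range(0, 3)),    # group 1: sex dummies
              list(range(3, 6)),    # group 2: length, diameter, height
              list(range(6, 10))]   # group 3: whole, shucked, viscera, shell
    d = [len(J) for J in groups]    # group sizes d_k = (3, 3, 4)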
We randomly selected 2920 examples for training, including the tuning of λ by 10-fold cross-validation, and left the remaining 1257 for testing. The mean squared test error is on par with the lasso (4.3).
The coefficients estimated on the training set are reported in Table 1. Weight parameters are a main contributor to the estimation of the age of an abalone, while sex is not essential, except for infant.
Group     Coefficients                          d_k^{1/4} (Σ_{j∈J_k} |β_j|^{4/3})^{3/4}
sex        0.051    0.036   −0.360                0.516
size      −0.044    1.134    0.358                1.7405
weight     4.370   −4.499   −1.110    1.399      11.989

Table 1: Coefficients obtained on the Abalone dataset. The last column represents the value d_k^{1/4} (Σ_{j∈J_k} |β_j|^{4/3})^{3/4}.

4.2 Delve Census Database
The Delve Census problem [7] consists in predicting the median price of a house in different survey regions. Each of the 22732 survey regions is represented by 134 demographic measurements. Several prototypes are available. We focussed on the prototype "house-price-16L", composed of 16 variables. We derived this prototype by including all the other variables related to these 16 variables. The final dataset is then composed of 37 variables, split up into 10 groups¹.

¹ A description of the dataset is available at http://www.hds.utc.fr/~mszafran/nips07/.
We randomly selected 8000 observations for training and left the remaining 14732 for testing. We divided the training observations into 10 distinct datasets. For each dataset, the parameter λ was selected by a 10-fold cross validation, and the mean squared error was computed on the testing set. Table 2 reports the mean squared test errors obtained with hierarchical penalization (hp), the group-lasso (gl) and the lasso estimates.
Datasets        1      2      3      4      5      6      7      8      9      10     mean error
hp (×10⁹)      2.363  2.745  2.289  4.481  2.211  2.364  2.460  2.298  2.461  2.286   2.596
gl (×10⁹)      2.429  2.460  2.289  4.653  2.230  2.364  2.472  2.308  2.454  2.291   2.595
lasso (×10⁹)   2.380  2.716  2.293  4.656  2.216  2.368  2.490  2.295  2.483  2.288   2.618

Table 2: Mean squared test errors obtained with different methods for the 10 datasets.
Hierarchical penalization performs better than the lasso on 8 datasets. It also performs better than the group-lasso on 6 datasets, and obtains equal results on 2 datasets. However, the lowest overall mean error is achieved by the group-lasso.
4.3 Multiple Kernel Learning
Multiple kernel learning has drawn much interest in classification with support vector machines (SVMs), starting from the work of Lanckriet et al. [8]. The problem consists in learning a convex combination of kernels in the SVM optimization algorithm. Here, we show that hierarchical penalization is well suited for this purpose for other kernel predictors, and we illustrate its effect on kernel smoothing in the regression setup.
Kernel smoothing has been studied in nonparametric statistics since the 60's [9]. Here, we consider the model where the response variable y is estimated by a sum of kernel functions

    y_i = Σ_{j=1}^{n} β_j φ_h(x_i, x_j) + ε_i ,
where φ_h is the kernel with scale factor (or bandwidth) h, and ε_i is a residual error. For the purpose of combining K bandwidths, the general criterion (3) reads

    min_{{β_k}_{k=1}^{K}}  Σ_{i=1}^{n} ( y_i − Σ_{k=1}^{K} Σ_{j=1}^{n} β_{k,j} φ_{h_k}(x_i, x_j) )²  +  λ Σ_{k=1}^{K} n^{1/4} ( Σ_{j=1}^{n} |β_{k,j}|^{4/3} )^{3/4} .        (5)
The penalized model (5) has been applied to the motorcycle dataset [9]. This one-dimensional problem makes it possible to display the contribution of each bandwidth to the solution. We used Gaussian kernels, with 7 bandwidths ranging from 10⁻¹ to 10².
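A minimal sketch of the resulting design, under our own choice of names: each bandwidth h_k contributes one n-column Gaussian kernel block, and each block forms one group of coefficients in (5):

    import numpy as np

    def gaussian_kernel(x, h):
        # phi_h(x_i, x_j) = exp(-(x_i - x_j)^2 / (2 h^2))
        diff = x[:, None] - x[None, :]
        return np.exp(-diff ** 2 / (2.0 * h ** 2))

    x = np.linspace(0.0, 60.0, 133)       # e.g. the motorcycle time axis
    bandwidths = np.logspace(-1, 2, 7)    # 7 bandwidths from 10^-1 to 10^2
    Phi = np.hstack([gaussian_kernel(x, h) for h in bandwidths])  # n x (K n)
    groups = [list(range(k * len(x), (k + 1) * len(x)))
              for k in range(len(bandwidths))]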
Figure 3 displays the results obtained for different penalization parameters: the estimated function obtained by the combination of the selected bandwidths, and the contribution of each bandwidth to the model. We display three settings of the penalization parameter λ, corresponding to slight overfitting, good fit and slight under-fitting. The coefficients of bandwidths h₂, h₆ and h₇ were always null and are thus not displayed. As expected, when the penalization parameter λ increases, the fit becomes smoother, and the number of contributing bandwidths decreases. We also observe that the effective contribution of some bandwidths is limited to a few kernels: there are few leading terms in the expansion.
5 Conclusion and further works
Hierarchical penalization is a generic framework that enables hierarchically structured variables to be processed by usual statistical models. The structure is provided to the model via constraints on the subgroups of variables defined at each level of the hierarchy. The fitted model is then biased towards statistical explanations that are "simple" with respect to this structure, that is, solutions which promote a small number of groups of variables, with a few leading components.
In this paper, we detailed the general framework of hierarchical penalization for tree structures of height two, and discussed its specific properties in terms of convexity and parsimony. Then, we proposed an efficient active set algorithm that incrementally builds an optimal solution to the problem. We illustrated how the approach can be used when features come in predefined groups, or when discrete variables, encoded by several binary variables, result in groups of variables. Finally, we also showed how the algorithm can be used to learn from multiple kernels in regression. We are now performing quantitative empirical evaluations, with applications to regression, classification and clustering, and comparisons to other regularization schemes, such as the group-lasso.
We then plan to extend the formalization to hierarchies of arbitrary height, whose properties are
currently under study. We will then be able to tackle new applications, such as genomics, where the
available gene ontologies are hierarchical structures that can be faithfully approximated by trees.
References
[1] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[2] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–499, 2004.
[3] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68(1):49–67, 2006.
Figure 3: Hierarchical penalization applied to kernel smoothing on the motorcycle data, for λ = 10, 25 and 50, showing the combined fit and the contributions of bandwidths h₁ = 10⁻¹, h₃ = 1, h₄ = 10^{1/2} and h₅ = 10. Combined: the points represent data and the solid line the function of estimated responses. Isolated bandwidths: the points represent partial residuals and the solid line represents the contribution of the bandwidth to the model.
[4] Y. Grandvalet and S. Canu. Adaptive scaling for feature selection in SVMs. In Advances in Neural Information Processing Systems, volume 15. MIT Press, 2003.
[5] M. R. Osborne, B. Presnell, and B. A. Turlach. On the lasso and its dual. Journal of Computational and Graphical Statistics, 9(2):319–337, June 2000.
[6] D.J. Newman, S. Hettich, C.L. Blake and C.J. Merz. UCI repository of machine learning databases, 1998. URL http://www.ics.uci.edu/~mlearn/MLRepository.html.
[7] Delve: Data for evaluating learning in valid experiments. URL http://www.cs.toronto.edu/~delve/.
[8] G. Lanckriet, T. De Bie, N. Cristianini, M. Jordan, and W. Noble. A statistical framework for genomic data fusion. Bioinformatics, 20:2626–2635, 2004.
[9] W. Härdle. Applied Nonparametric Regression, volume 19. Economic Society Monographs, 1990.
2,579 | 3,339 | Support Vector Machine Classification
with Indefinite Kernels
Ronny Luss
ORFE, Princeton University
Princeton, NJ 08544
rluss@princeton.edu
Alexandre d'Aspremont
ORFE, Princeton University
Princeton, NJ 08544
aspremon@princeton.edu
Abstract
In this paper, we propose a method for support vector machine classification using
indefinite kernels. Instead of directly minimizing or stabilizing a nonconvex loss
function, our method simultaneously finds the support vectors and a proxy kernel
matrix used in computing the loss. This can be interpreted as a robust classification
problem where the indefinite kernel matrix is treated as a noisy observation of the
true positive semidefinite kernel. Our formulation keeps the problem convex and
relatively large problems can be solved efficiently using the analytic center cutting
plane method. We compare the performance of our technique with other methods
on several data sets.
1 Introduction
Here, we present an algorithm for support vector machine (SVM) classification using indefinite kernels. Our interest in indefinite kernels is motivated by several observations. First, certain similarity
measures take advantage of application-specific structure in the data and often display excellent
empirical classification performance. Unlike popular kernels used in support vector machine classification, these similarity matrices are often indefinite and so do not necessarily correspond to a
reproducing kernel Hilbert space (see [1] for a discussion).
An application of classification with indefinite kernels to image classification using Earth Mover's Distance was discussed in [2]. Similarity measures for protein sequences such as the Smith-Waterman and BLAST scores are indefinite yet have provided hints for constructing useful positive semidefinite kernels such as those described in [3], or have been transformed into positive semidefinite
kernels (see [4] for example). Here instead, our objective is to directly use these indefinite similarity
measures for classification.
Our work also closely follows recent results on kernel learning (see [5] or [6]), where the kernel
matrix is learned as a linear combination of given kernels, and the resulting kernel is explicitly
constrained to be positive semidefinite (the authors of [7] have adapted the SMO algorithm to solve
the case where the kernel is written as a positively weighted combination of other kernels). In our
case however, we never explicitly optimize the kernel matrix because this part of the problem can be
solved explicitly, which means that the complexity of our method is substantially lower than that of
classical kernel learning methods and closer in spirit to the algorithm used in [8], who formulate the
multiple kernel learning problem of [7] as a semi-infinite linear program and solve it with a column
generation technique similar to the analytic center cutting plane method we use here.
Finally, it is sometimes impossible to prove that some kernels satisfy Mercer's condition or the
numerical complexity of evaluating the exact positive semidefinite kernel is too high and a proxy
(and not necessarily positive semidefinite) kernel has to be used instead (see [9] for example). In
both cases, our method allows us to bypass these limitations.
1.1 Current results
Several methods have been proposed for dealing with indefinite kernels in SVMs. A first direction
embeds data in a pseudo-Euclidean (pE) space: [10] for example, formulates the classification problem with an indefinite kernel as that of minimizing the distance between convex hulls formed from
the two categories of data embedded in the pE space. The nonseparable case is handled in the same
manner using reduced convex hulls (see [11] for a discussion of SVM geometric interpretations).
Another direction applies direct spectral transformations to indefinite kernels: flipping the negative eigenvalues or shifting the kernel's eigenvalues and reconstructing the kernel with the original
eigenvectors in order to produce a positive semidefinite kernel (see [12] and [2]).
Yet another option is to reformulate either the maximum margin problem or its dual in order to
use the indefinite kernel in a convex optimization problem (see [13]). An equivalent formulation
of SVM with the same objective but where the kernel appears in the constraints can be modified
to a convex problem by eliminating the kernel from the objective. Directly solving the nonconvex
problem sometimes gives good results as well (see [14] and [10]).
1.2 Contribution
Here, instead of directly transforming the indefinite kernel, we simultaneously learn the support vector weights and a proxy positive semidefinite kernel matrix, while penalizing the distance between
this proxy kernel and the original, indefinite one. Our main result is that the kernel learning part of
that problem can be solved explicitly, meaning that the classification problem with indefinite kernels
can simply be formulated as a perturbation of the positive semidefinite case.
Our formulation can also be interpreted as a worst-case robust classification problem with uncertainty on the kernel matrix. In that sense, indefinite similarity matrices are seen as noisy observations of an unknown positive semidefinite kernel. From a complexity standpoint, while the original
SVM classification problem with indefinite kernel is nonconvex, the robustification we detail here is
a convex problem, and hence can be solved efficiently with guaranteed complexity bounds.
The paper is organized as follows. In Section 2 we formulate our main classification problem and
detail its interpretation as a robust SVM. In Section 3 we describe an algorithm for solving this
problem. Finally, in Section 4, we test the numerical performance of these methods on various
applications.
2 SVM with indefinite kernels
Here, we introduce our robustification of the SVM classification problem with indefinite kernels.
2.1 Robust classification
Let K ∈ S_n be a given kernel matrix and y ∈ R^n be the vector of labels, with Y = diag(y) the matrix with diagonal y, where S_n is the set of symmetric matrices of size n and R^n is the set of n-vectors of real numbers. We can write the dual of the SVM classification problem with hinge loss and quadratic penalty as:

    maximize   α^T e − Tr(K(Yα)(Yα)^T)/2
    subject to α^T y = 0
               0 ≤ α ≤ C        (1)
in the variable α ∈ R^n and where e is an n-vector of ones. When K is positive semidefinite, this problem is a convex quadratic program. Suppose now that we are given an indefinite kernel matrix K0 ∈ S_n. We formulate a robust version of problem (1) by restricting K to be a positive semidefinite kernel matrix in some given neighborhood of the original (indefinite) kernel matrix K0:
    max_{α^T y=0, 0≤α≤C}   min_{K⪰0, ‖K−K0‖²_F ≤ β}   α^T e − Tr(K(Yα)(Yα)^T)/2        (2)
in the variables K ∈ S_n and α ∈ R^n, where the parameter β > 0 controls the distance between the original matrix K0 and the proxy kernel K. This can be interpreted as a worst-case robust classification problem with bounded uncertainty on the kernel matrix K. The above problem is infeasible for some values of β, so we replace here the hard constraint on K by a penalty on the distance between the proxy positive semidefinite kernel and the given indefinite matrix. The problem we solve is now:
    max_{α^T y=0, 0≤α≤C}   min_{K⪰0}   α^T e − Tr(K(Yα)(Yα)^T)/2 + ρ‖K−K0‖²_F        (3)
in the variables K ∈ S_n and α ∈ R^n, where the parameter ρ > 0 controls the magnitude of the penalty on the distance between K and K0. The inner minimization problem is a convex conic program on K. Also, as the pointwise minimum of a family of concave quadratic functions of α, the solution to the inner problem is a concave function of α, and hence the outer optimization problem is also convex (see [15] for further details). Thus, (3) is a concave maximization problem subject to linear constraints and is therefore a convex problem in α.
Our key result here is that the inner kernel learning optimization problem can be solved in closed form. For a fixed α, the inner minimization problem is equivalent to the following problem:

    minimize   ‖K − (K0 + (1/4ρ)(Yα)(Yα)^T)‖²_F
    subject to K ⪰ 0

in the variable K ∈ S_n. This is the projection of K0 + (1/4ρ)(Yα)(Yα)^T on the cone of positive semidefinite matrices. The optimal solution to this problem is then given by:
    K* = ( K0 + (1/4ρ)(Yα)(Yα)^T )_+        (4)

where X_+ is the positive part of the matrix X, i.e. X_+ = Σ_i max(0, λ_i) x_i x_i^T, where λ_i and x_i are the ith eigenvalue and eigenvector of the matrix X.
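A minimal numpy sketch of this closed form (variable names are ours):

    import numpy as np

    def proxy_kernel(K0, alpha, y, rho):
        # K* = (K0 + (1/4 rho)(Y a)(Y a)^T)_+ , the PSD projection in (4)
        M = K0 + np.outer(y * alpha, y * alpha) / (4.0 * rho)
        lam, V = np.linalg.eigh(M)         # M is symmetric
        lam_plus = np.maximum(lam, 0.0)    # zero out negative eigenvalues
        return (V * lam_plus) @ V.T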
Plugging this solution into (3), we get:

    max_{α^T y=0, 0≤α≤C}   α^T e − Tr(K*(Yα)(Yα)^T)/2 + ρ‖K* − K0‖²_F

in the variable α ∈ R^n, where (Yα)(Yα)^T is the rank one matrix with coefficients y_i α_i α_j y_j, i, j = 1, …, n. We can rewrite this as an eigenvalue optimization problem by using the eigenvalue representation of X_+. Letting the eigenvalue decomposition of K0 + (1/4ρ)(Yα)(Yα)^T be V D V^T, we get K* = V D_+ V^T and, with v_i the ith column of V, we can write:
    Tr(K*(Yα)(Yα)^T) = (Yα)^T V D_+ V^T (Yα)
                     = Σ_i max( 0, λ_i( K0 + (1/4ρ)(Yα)(Yα)^T ) ) (α^T Y v_i)²

where λ_i(X) is the ith eigenvalue of the quantity X. Using the same technique, we can also rewrite the term ‖K* − K0‖²_F using this eigenvalue decomposition. Our original optimization problem (3) finally becomes:
P
maximize ?T e ? 12 i max(0, ?i (K0 + (Y ?)(Y ?)T /4?))(?T Y vi )2
P
+? i (max(0, ?i (K0 + (Y ?)(Y ?)T /4?)))2
(5)
P
?2? i Tr((vi viT )K0 )max(0, ?i (K0 + (Y ?)(Y ?)T /4?)) + ? Tr(K0 K0 )
subject to
?T y = 0, 0 ? ? ? C
in the variable ? ? Rn .
2.2 Dual problem
Because problem (3) is convex with at least one compact feasible set, we can formulate the dual problem to (5) by simply switching the max and the min. The inner maximization is a quadratic program in α, and hence has a quadratic program as its dual. We then get the dual by plugging this inner dual quadratic program into the outer minimization, to get the following problem:

    minimize   Tr(K⁻¹(Y(e − λ + μ + yν))(Y(e − λ + μ + yν))^T)/2 + Cμ^T e + ρ‖K − K0‖²_F
    subject to K ⪰ 0,  λ, μ ≥ 0        (6)
in the variables K ∈ S_n, λ, μ ∈ R^n and ν ∈ R. This dual problem is a quadratic program in the variables λ and μ, which correspond to the primal constraints 0 ≤ α ≤ C, and ν, which is the dual variable for the constraint α^T y = 0. As we have seen earlier, any feasible solution to the primal problem produces a corresponding kernel in (4), and plugging this kernel into the dual problem in (6) allows us to calculate a dual feasible point by solving a quadratic program which gives a dual objective value, i.e. an upper bound on the optimum of (5). This bound can then be used to compute a duality gap and track convergence.
2.3 Interpretation
We noted that our problem can be viewed as a worst-case robust classification problem with uncertainty on the kernel matrix. Our explicit solution of the optimal worst-case kernel given in (4) is the projection of a penalized rank-one update to the indefinite kernel on the cone of positive semidefinite matrices. As ρ tends to infinity, the rank-one update has less effect and in the limit, the optimal kernel is the kernel given by zeroing out the negative eigenvalues of the indefinite kernel. This means that if the indefinite kernel contains a very small amount of noise, the best positive semidefinite kernel to use with SVM in our framework is the positive part of the indefinite kernel.
This limit as ρ tends to infinity also motivates a heuristic for the transformation of the kernel on the testing set. Since the negative eigenvalues of the training kernel are thresholded to zero in the limit, the same transformation should occur for the test kernel. Hence, we update the entries of the full kernel corresponding to training instances by the rank-one update resulting from the optimal solution to (3) and threshold the negative eigenvalues of the full kernel matrix to zero. We then use the test kernel values from the resulting positive semidefinite matrix.
3 Algorithms
We now detail two algorithms that can be used to solve Problem (5). The optimization problem is
the maximization of a nondifferentiable concave function subject to convex constraints. An optimal
point always exists since the feasibility set is bounded and nonempty. For numerical stability, in both
algorithms, we quadratically smooth our objective to calculate a gradient instead. We first describe
a simple projected gradient method which has numerically cheap iterations but has no convergence
bound. We then show how to apply the much more efficient analytic center cutting plane method
whose iterations are slightly more complex but which converges linearly.
Smoothing. Our objective contains terms of the form max{0, f(x)} for some function f(x), which are not differentiable (described in the section below). These functions are easily smoothed out by a regularization technique (see [16] for example). We replace them by a continuously differentiable μ/2-approximation as follows:

    φ_μ(f(x)) = max_{0≤u≤1} ( u f(x) − (μ/2) u² ),

and the gradient is given by ∇φ_μ(f(x)) = u*(x) ∇f(x), where u*(x) = argmax_{0≤u≤1} ( u f(x) − (μ/2) u² ).
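The inner maximization has the simple closed form u*(x) = min(max(f(x)/μ, 0), 1), which the following sketch (ours) exploits:

    import numpy as np

    def smoothed_max0(f, mu):
        # phi_mu(f) = max_{0 <= u <= 1} (u f - (mu/2) u^2)
        u = np.clip(f / mu, 0.0, 1.0)
        return u * f - 0.5 * mu * u ** 2

    def smoothed_max0_grad_factor(f, mu):
        # grad phi_mu(f(x)) = u*(x) * grad f(x); this returns u*(x)
        return np.clip(f / mu, 0.0, 1.0)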
Gradient. Calculating the gradient of our objective requires a full eigenvalue decomposition to compute the gradient of each eigenvalue. Given a matrix X(α), the derivative of the ith eigenvalue with respect to α is given by:

    ∂λ_i(X(α))/∂α = v_i^T (∂X(α)/∂α) v_i        (7)

where v_i is the ith eigenvector of X(α). We can then combine this expression with the smooth approximation above to get the gradient.
We note that eigenvalues of symmetric matrices are not differentiable when some of them have multiplicities greater than one (see [17] for a discussion). In practice however, most tested kernels were
of full rank with distinct eigenvalues so we ignore this issue here. One may also consider projected
subgradient methods, which are much slower, or use subgradients for analytic center cutting plane
methods (which does not affect complexity).
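For our X(α) = K0 + (1/4ρ)(Yα)(Yα)^T, the derivative of X with respect to a single α_m is rank two, so (7) reduces to a cheap expression; a sketch with names of our choosing:

    import numpy as np

    def eigval_grads(K0, alpha, y, rho):
        M = K0 + np.outer(y * alpha, y * alpha) / (4.0 * rho)
        lam, V = np.linalg.eigh(M)
        Ya = y * alpha
        # (7): d lam_i / d alpha_m = v_i^T (dX/d alpha_m) v_i
        #                          = y_m V[m, i] (v_i^T Ya) / (2 rho)
        G = (V * y[:, None]) * (V.T @ Ya)[None, :] / (2.0 * rho)
        return lam, G    # G[m, i] = d lam_i / d alpha_m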
3.1 Projected gradient method
The projected gradient method takes a steepest descent step, then projects the new point back onto the feasible region (see [18] for example). In order to use these methods the objective function must be differentiable, and the method is only efficient if the projection step is numerically cheap. We choose an initial point α_0 ∈ R^n and the algorithm proceeds as follows:

Projected gradient method
1. Compute α_{i+1} = α_i + t∇f(α_i).
2. Set α_{i+1} = p_A(α_{i+1}).
3. If gap ≤ ε stop, otherwise go back to step 1.
The complexity of each iteration breaks down as follows.
Step 1. This requires an eigenvalue decomposition and costs O(n³). We note that a line search would
be costly because it would require multiple eigenvalue decompositions to recalculate the objective
multiple times.
Step 2. This is a projection onto the region A = {α^T y = 0, 0 ≤ α ≤ C} and can be solved explicitly by sorting the vector of entries, with cost O(n log n).
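The paper solves this projection with a sort; an equivalent way to compute the same point, sketched here with our own names, bisects on the multiplier of the equality constraint:

    import numpy as np

    def project_A(a, y, C, iters=60):
        # projection of a onto {x : y^T x = 0, 0 <= x <= C};
        # x(nu) = clip(a - nu*y, 0, C) and y^T x(nu) is nonincreasing in nu
        def x(nu):
            return np.clip(a - nu * y, 0.0, C)
        lo, hi = -np.max(np.abs(a)) - C, np.max(np.abs(a)) + C
        for _ in range(iters):
            nu = 0.5 * (lo + hi)
            if x(nu) @ y > 0.0:
                lo = nu
            else:
                hi = nu
        return x(0.5 * (lo + hi))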
Stopping Criterion. We can compute a duality gap using the results of §2.2: let K_i = (K0 + (1/4ρ)(Yα_i)(Yα_i)^T)_+ (the kernel at iteration i); then solving problem (1), which is just an SVM with a convex kernel K_i, produces an upper bound on (5), and hence a bound on the suboptimality of the current solution.
Complexity. The number of iterations required by this method to reach a target precision of ε is typically in O(1/ε²).
3.2 Analytic center cutting plane method
The analytic center cutting plane method (ACCPM) reduces the feasible region at each iteration using a new cut of the feasible region, computed by evaluating a subgradient of the objective function at the analytic center of the current set, until the volume of the reduced region converges to the target precision. This method does not require differentiability. We set A_0 = {α^T y = 0, 0 ≤ α ≤ C}, which we can write as {A_0 α ≤ b_0}, to be our first localization set for the optimal solution. The method then works as follows (see [18] for a more complete reference on cutting plane methods):
Analytic center cutting plane method
1. Compute α_{i+1} as the analytic center of A_i by solving:

    α_{i+1} = argmin_{y∈R^n} − Σ_{i=1}^{m} log(b_i − a_i^T y)

where a_i^T represents the ith row of coefficients from the left-hand side of {A_0 α ≤ b_0}.
2. Compute ∇f(α_{i+1}) at the center α_{i+1} and update the (polyhedral) localization set:

    A_{i+1} = A_i ∩ {α : ∇f(α_{i+1})(α − α_{i+1}) ≥ 0}

3. If gap ≤ ε stop, otherwise go back to step 1.
The complexity of each iteration breaks down as follows.
Step 1. This step computes the analytic center of a polyhedron and can be solved in O(n³) operations using interior point methods for example.
Step 2. This simply updates the polyhedral description.
Stopping Criterion. An upper bound is computed by maximizing a first order Taylor approximation of f(α) at α_i over all points in an ellipsoid that covers A_i, which can be done explicitly.
Complexity. ACCPM is provably convergent in O(n log(1/ε)²) iterations when using cut elimination, which keeps the complexity of the localization set bounded. Other schemes are available with slightly different complexities: an O(n²/ε²) complexity is achieved in [19] using (cheaper) approximate centers for example.
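A minimal damped-Newton sketch of the analytic center computation in step 1 (our own implementation choice; a strictly feasible starting point is assumed):

    import numpy as np

    def analytic_center(A, b, x0, iters=50):
        x = x0.copy()
        for _ in range(iters):
            s = b - A @ x                       # slacks, must stay > 0
            g = A.T @ (1.0 / s)                 # gradient of -sum_i log s_i
            H = A.T @ ((1.0 / s ** 2)[:, None] * A)
            dx = np.linalg.solve(H, -g)         # Newton direction
            t = 1.0
            while np.min(b - A @ (x + t * dx)) <= 0.0:
                t *= 0.5                        # backtrack to stay feasible
            x = x + t * dx
        return x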
4 Experiments
In this section we compare the generalization performance of our technique to other methods of
applying SVM classification given an indefinite similarity measure. We also test SVM classification
performance on positive semidefinite kernels using the LIBSVM library. We finish with experiments
showing convergence of our algorithms. Our algorithms were implemented in Matlab.
4.1 Generalization
We compare our method for SVM classification with indefinite kernels to several of the kernel preprocessing techniques discussed earlier. The first three techniques perform spectral transformations
on the indefinite kernel. The first, denoted denoise, thresholds the negative eigenvalues to zero. The
second transformation, called flip, takes the absolute value of all eigenvalues. The last transformation, shift, adds a constant to each eigenvalue making them all positive. See [12] for further details.
We finally also compare with using SVM on the original indefinite kernel (SVM converges but the
solution is only a stationary point and is not guaranteed to be optimal).
We experiment on data from the USPS handwritten digits database (described in [20]) using the
indefinite Simpson score (SS) to compare two digits and on two data sets from the UCI repository
(see [21]) using the indefinite Epanechnikov (EP) kernel. The data is randomly divided into training
and testing data. We apply 5-fold cross validation and use an accuracy measure (described below)
to determine the optimal parameters C and ρ. We then train a model with the full training set and
optimal parameters and test on the independent test set.
Table 1: Statistics for various data sets.

Data Set       # Train   # Test   λ_min       λ_max
USPS-3-5-SS      767       773    −34.76      453.58
USPS-4-6-SS      829       857    −37.30      413.17
Diabetes-EP      614       154     −0.27       18.17
Liver-EP         276        69    −1.38e-15     3.74
Table 1 provides statistics including the minimum and maximum eigenvalues of the training kernels.
The main observation is that the USPS data uses highly indefinite kernels while the UCI data use
kernels that are nearly positive semidefinite. Table 2 displays performance by comparing accuracy
and recall. Accuracy is defined as the percentage of total instances predicted correctly. Recall is the
percentage of true positives that were correctly predicted positive.
Our method is referred to as Indefinite SVM. We see that our method performs favorably among
the USPS data. Both measures of performance are quite high for all methods. Our method does
not perform as well on the UCI data sets but is still favorable on one of the measures in each
experiment. Notice though that recall is not good in the liver data set overall which could be the
result of overfitting one of the classification categories. The liver data set uses a kernel that is almost
positive semidefinite - this is an example where the input is almost a true kernel and Indefinite
SVM finds one slightly better. We postulate that our method will perform better in situations where
the similarity measure is highly indefinite, as in the USPS dataset, while measures that are almost positive semidefinite may be seen as having a small amount of noise.
Table 2: Performance measures for various data sets.

Data Set      Measure    Denoise   Flip    Shift   SVM    Indefinite SVM
USPS-3-5-SS   Accuracy    95.47    95.73   90.43   74.90      96.25
              Recall      94.50    95.45   92.11   72.73      96.65
USPS-4-6-SS   Accuracy    97.78    97.90   94.28   90.08      97.90
              Recall      98.42    98.65   93.68   88.49      98.87
Diabetes-EP   Accuracy    75.32    74.68   68.83   75.32      68.83
              Recall      90.00    90.00   92.00   90.00      95.00
Liver-EP      Accuracy    63.77    63.77   57.97   63.77      65.22
              Recall      22.58    22.58   25.81   22.58      22.58
4.2 Algorithm Convergence
We ran our two algorithms on data sets created by randomly perturbing the four USPS data sets used
above. The average results with one standard deviation above and below the mean are displayed in
Figure 1 with the duality gap in log scale (note that the codes were not stopped here and that the
target gap improvement is usually much smaller than 10⁻⁸). As expected, ACCPM converges much
faster (in fact linearly) to a higher precision while each iteration requires solving a linear program
of size n. The gradient projection method converges faster in the beginning but stalls at a higher
precision, however each iteration only requires sorting the current point.
Figure 1: Convergence plots for ACCPM (left) and the projected gradient method (right) on randomly perturbed USPS data sets (average duality gap versus iteration number, log scale; dashed lines at plus and minus one standard deviation).
5 Conclusion
We have proposed a technique for incorporating indefinite kernels into the SVM framework without any explicit transformations. We have shown that if we view the indefinite kernel as a noisy
instance of a true kernel, we can learn an explicit solution for the optimal kernel with a tractable
convex optimization problem. We give two convergent algorithms for solving this problem on relatively large data sets. Our initial experiments show that our method can at least fare comparably
with other methods handling indefinite kernels in the SVM framework but provides a much clearer
interpretation for these heuristics.
References
[1] C. S. Ong, X. Mary, S. Canu, and A. J. Smola. Learning with non-positive kernels. Proceedings of the 21st International Conference on Machine Learning, 2004.
[2] A. Zamolotskikh and P. Cunningham. An assessment of alternative strategies for constructing EMD-based kernel functions for use in an SVM for image classification. Technical Report UCD-CSI-2007-3, 2004.
[3] H. Saigo, J. P. Vert, N. Ueda, and T. Akutsu. Protein homology detection using string alignment kernels. Bioinformatics, 20(11):1682–1689, 2004.
[4] G. R. G. Lanckriet, N. Cristianini, M. I. Jordan, and W. S. Noble. Kernel-based integration of genomic data using semidefinite programming. 2003. citeseer.ist.psu.edu/648978.html.
[5] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27–72, 2004.
[6] C. S. Ong, A. J. Smola, and R. C. Williamson. Learning the kernel with hyperkernels. Journal of Machine Learning Research, 6:1043–1071, 2005.
[7] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. Proceedings of the 21st International Conference on Machine Learning, 2004.
[8] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning. Journal of Machine Learning Research, 7:1531–1565, 2006.
[9] Marco Cuturi. Permanents, transport polytopes and positive definite kernels on histograms. Proceedings of the Twentieth International Joint Conference on Artificial Intelligence, 2007.
[10] B. Haasdonk. Feature space interpretation of SVMs with indefinite kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(4), 2005.
[11] K. P. Bennett and E. J. Bredensteiner. Duality and geometry in SVM classifiers. Proceedings of the 17th International Conference on Machine Learning, pages 57–64, 2000.
[12] G. Wu, E. Y. Chang, and Z. Zhang. An analysis of transformation on non-positive semidefinite similarity matrix for kernel machines. Proceedings of the 22nd International Conference on Machine Learning, 2005.
[13] H.-T. Lin and C.-J. Lin. A study on sigmoid kernel for SVM and the training of non-PSD kernels by SMO-type methods. 2003.
[14] A. Woźnica, A. Kalousis, and M. Hilario. Distances and (indefinite) kernels for sets of objects. Proceedings of the 6th International Conference on Data Mining, pages 1151–1156, 2006.
[15] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[16] C. Gigola and S. Gomez. A regularization method for solving the finite convex min-max problem. SIAM Journal on Numerical Analysis, 27(6):1621–1634, 1990.
[17] M. Overton. Large-scale optimization of eigenvalues. SIAM Journal on Optimization, 2(1):88–120, 1992.
[18] D. Bertsekas. Nonlinear Programming, 2nd Edition. Athena Scientific, 1999.
[19] J.-L. Goffin and J.-P. Vial. Convex nondifferentiable optimization: A survey focused on the analytic center cutting plane method. Optimization Methods and Software, 17(5):805–867, 2002.
[20] J. J. Hull. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5), 1994.
[21] A. Asuncion and D.J. Newman. UCI Machine Learning Repository. University of California, Irvine, School of Information and Computer Sciences, 2007. URL http://www.ics.uci.edu/~mlearn/MLRepository.html.
2,580 | 334 | VLSI Implementations of Learning
and Memory Systems: A Review
Mark A. Holler
Intel Corporation
2250 Mission College Blvd.
Santa Clara, Ca. 95052-8125
ABSTRACT
A large number of VLSI implementations of neural network models
have been reported. The diversity of these implementations is
noteworthy. This paper attempts to put a group of representative
VLSI implementations in perspective by comparing and contrasting them. Design trade-offs are discussed and some suggestions for the direction of future implementation efforts are made.
IMPLEMENTATION
Changing the way information is represented can be beneficial. For example a change
of representation can make information more compact for storage and transmission.
Implementation of neural computational models is just the process of changing the representation of a neural model from mathematical symbolism to a physical embodiment, for the purpose of shortening the time it takes to process information according to the neural model.
FLEXIBILITY VS. PERFORMANCE
Today most neural models are already implemented in silicon VLSI, in the form of programs running on general purpose digital von Neumann computers. These machines
are available at low cost and are highly flexible. Their flexibility results from the ease
with which their programs can be changed. Maximizing flexibility, however, usually
results in reduced performance. A program will often have to specify several simple op-
erations to carry out one higher level operation. An example is performing a sequence
of shifts and adds to accomplish a multiplication. Higher level functions can be directly
implemented but more hardware is required and that hardware can't be used to execute
other high level functions. Flexibility is lost. This trade-off between flexibility and
performance is a fundamen tal issue in computational device design and will be observed
in the devices reviewed here.
GROUND RULES
The neural network devices which will be discussed each consist of a set of what could loosely be called "artificial neurons". The artificial neurons typically calculate the inner product of an input vector and a stored weight vector, a sum of products of inputs times weights. An artificial synapse stores one weight and calculates one product or connection each time a new input is provided. The basic unit of computation is a "connection", and the basic measure of performance is the number of connections the neural network can perform per second, (CPS). The CPS number is directly related to how fast the chip will be able to perform mappings from input to output or recognize input patterns. The artificial neurons also include a non-linear thresholding function.
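In code, the artificial neuron assumed throughout is simply the following (a sketch with names of our own choosing); evaluating it on an N-input vector performs N connections:

    import numpy as np

    def neuron(x, w, theta=0.0):
        activation = np.dot(w, x)            # one "connection" per weight
        return 1.0 if activation > theta else 0.0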
The comparison done here is restricted to devices which fit within this definition and as
a result a number of important neural devices such as those which perform early vision
processing or dynamical processing for optimization are not considered. In the interest
of brevity only representative state of the art devices are presented.
COMPARISON CRITERIA
The criteria for comparison are based on what would be important to a user: Performance,
Capability, Cost, Flexibility/Ease of Application.
In addition to the CPS measure of performance there is also a measure of how fast a chip can learn: how many connection or weight updates the chip can calculate and store per second, (CUPS), is an important performance measure for the chips which have learning capability. Three of the nine chips examined have learning on chip.
Important capabilities to consider are how big a network the chip can simulate, what precision of calculation the chip provides and how independent the chip is during learning. Table 1 provides neuron and synapse counts which indicate the maximum size network each chip can implement. The synaptic function and precision are noted in another column and comments about learning capability are also provided.
An interesting figure of merit is the ratio of CPS to the number of weights. This CPS per weight ratio will be referred to as the CPSPW. This figure of merit varies by over a factor of 1000 for the 9 chips considered, and all have ratios much higher than typical von Neumann machines or the human brain. See the last column of Table 1.
TABLE 1. VLSI Neural Network Implementations

Micro Devices MD1220 Neural Bit Slice [1]: CPS 0.01B; connect type 1b x 16b product; CUPS NA; learning off chip; 8 neurons; 2048 synapses (weights external); technology ?; synapse area ?; board level config., available; price $45; CPSPW 4883.

H. Graf, D. Henderson, AT&T Bell Labs [2]: CPS 80B; connect type 1b x 1-4b product; CUPS NA; learning off chip; 256 neurons; 8K-32K synapses; 0.9u CMOS; synapse area 1760 u2; chip level config., not available (research); CPSPW 2.5-10M.

J. Alspector, R. Allen, A. Jayakumar, Bellcore [3]: CPS 0.1B; connect type 5b x 5b product; CUPS 0.1B; Boltzmann learning on chip; 32 neurons; 992 synapses; 1.2u CMOS; synapse area 58344 u2; board in '92; price ?; CPSPW 100806.

Y. Arima et al., Mitsubishi Electric [4]: CPS 5.6B; connect type 1b x 6b product; CUPS 1.4B; Boltzmann learning on chip; 336 neurons; 28000 synapses; 1u CMOS; synapse area 4900 u2; chip level config., not available (research); CPSPW 200000.

D. Hammerstrom et al., Adaptive Solutions [5]: CPS 1.6B; connect type 1-16b x 1-16b, multiple synaptic functions; CUPS 0.24B; many learning algorithms on chip (back-prop etc.); 64 neurons; 128K-2M synapses; 0.8u CMOS; synapse area 1400 u2; board level config.; price ?; CPSPW 800-3.1K.

A. Agranat et al., Cal. Inst. Tech. [6]: CPS 0.5B; connect type 5b x 5b product; CUPS NA; learning off chip; 256 neurons; 65536 synapses; 2u CMOS CCD; synapse area 560 u2; not available (research); CPSPW 7629.

M. Yasunaga et al., Hitachi wafer scale [7]: CPS 2.3B; connect type 6b x 6b product; CUPS NA; learning off wafer; 1152 neurons; 73700 synapses; 0.8u CMOS gate array on eight 5" wafers; synapse area 410000 u2; wafer level config., not available; price >$10K; CPSPW 31208.

M. Tomlinson et al., Neural Semiconductor [8]: CPS 0.1B; connect type 4b x 4b product; CUPS NA; learning off chip; 32 neurons; 1024 synapses; 1.2u CMOS; synapse area 23000 u2; board level config., available 4/91; price $900; CPSPW 97656.

M. Holler et al., Intel Corp. 80170 ETANN [9]: CPS 2B; connect type 6b x 6b product; CUPS NA; learning off chip; 64 neurons; 10240 synapses; 1u CMOS EEPROM (non-volatile weights); synapse area ?; chip level config., available with tools; price $940; CPSPW 195313.

For comparison: Brain, CPSPW ~100; PC, CPSPW ~1.
In addition to pricing information, what little exists, Table 1 includes the effective
synapse area and process technology to give some indication of the relative cost of the
various designs.
Finally, to include something which suggests how flexible the chips are, comments are included in Table 1 to indicate whether or not the synaptic function, the learning algorithm and the network architecture of each chip can be changed. Also to be considered is how hard it is to set or continuously refresh weights, whether analog or digital. Analog vs. digital I/O is a consideration, as is availability and development tools. Demonstration in real applications would be another indicator of success, but none of these chips has yet reached this milestone.
COMPARISON
The first device [1], from Micro Devices, is a digital neural network which leaves the weight memory off chip. Its eight 16 bit by one bit serial synapse multipliers are multiplexed, which keeps the effective synapse cell size down. Using a single synaptic
multiplier per neuron makes the total compute time for a neuron's sum of products dependent on how many inputs are supplied to a neuron. One positive aspect of this
architecture is that any arbitrary number of inputs per neuron can be processed as long
as the neuron accumulator is wide enough not to overflow when a worst case large sum
of products is accumulated.
The Micro Devices chip shares this multiplexed synapse approach with the Adaptive Solutions X1 [5] and the CCD based design [6] by Agranat et al at Cal-Tech, although these two chips include weight memory on chip to attain much better data transfer performance from the weight store to the synaptic processors. The multiplexed synapse approach is a good one for reducing the effective synapse size, as can be seen by comparing the synapse area for these three chips [1,5,6] to those of the other chips; [5,6] have the two smallest cell sizes.
Micro Devices was first to introduce a commercial neural network chip and development tools. They also have the lowest cost chip available. Its all digital interface makes it easy to design in. Its only significant limitations are its low neuron count and the fact that it can only accept binary inputs and output binary activations.
Hitachi's wafer scale neural network [7], designed with gate array technology, uses pulse stream data representations, as does the Neural Semiconductor implementation [8]. Pulse stream representations make the implementation of a digital multiplier trivial: it becomes just an AND gate. One drawback of this approach is that the user must convert his input data to uncorrelated pulse streams. The Hitachi design is also interesting because it is clearly designed to take advantage of the fault tolerant aspect of neural networks. The system they have built consists of eight wafers which are very likely to have at least several bad die. The automated gate array design used by Hitachi resulted in the largest synapse area, at 410,000 u2.
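The AND-gate multiplication is easy to verify numerically: for uncorrelated binary pulse streams with duty cycles p and q, the rate of the ANDed stream estimates the product p*q. A small simulation sketch (parameters are ours):

    import numpy as np

    rng = np.random.default_rng(0)
    p, q, T = 0.3, 0.6, 100_000
    a = rng.random(T) < p                 # input pulse stream, rate p
    b = rng.random(T) < q                 # weight pulse stream, rate q
    product_estimate = np.mean(a & b)     # AND gate; close to 0.18 = p*q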
Neural Semiconductor's design puts the neuron units on a separate chip from its
synaptic units. This allows variable width input vectors with a large upper bound.
The CCD based design from Cal-Tech [6] is most noteworthy for its small cell size, 560 u2, and high synapse count which results from the use of multiplexed synaptic
processors and analog storage in a CCD. The drawback of this type of weight storage
is that it must be refreshed every few milliseconds at higher temperatures.
Intel's 80170 [9] uses analog non-volatile weight storage and uses a basic characteristic
of neural networks to advantage. It uses the adaptation that is going on during learning
to adapt to variations in the analog circuit computing elements on the chip. This is noteworthy because it is another example of putting one of the properties of neural networks
to use to enable a design approach different from conventional digital VLSI design.
The AT&T chip reported by Graf & Henderson [2] has achieved the highest CPS rate, 80B, of any of the chips. It was designed with handprinted character recognition in mind and as a result accepts only binary inputs (black and white). It uses a hybrid circuit design approach, digital for inputs and weight storage but analog summation in the form of currents. This chip is flexible in that its weight precision can be traded off for higher synapse count.
The last three chips [3,4,5] all have learning on chip. Two of them use Boltzmann learning, which has been shown by Hinton [11] to be a form of gradient descent learning like back-propagation. These are the Bellcore chip [3] reported by J. Alspector, R. Allen and A. Jayakumar, and the chip reported by Y. Arima et al at Mitsubishi [4]. The Mitsubishi chip has the most impressive number for learning performance and the second best mapping performance at 5.6B CPS. Its one drawback is that the analog weights it learns are volatile and must be refreshed. Bellcore's Boltzmann machine uses digital weight storage which does not require refresh. However, as you will notice, the Bellcore synapse cell size is 10X larger, due to the use of digital storage and a slightly lower density 1.2u technology.
The Adaptive Solutions chip, with programmable learning and programmable synaptic function, represents the flexibility end of the performance/flexibility trade-off. It is a single instruction multiple data path (SIMD) von Neumann machine. Its 64 synaptic processors are multiplexed up to 4096 times for eight bit weights, making the effective synapse cell size very small, 1400 u2, in spite of using digital SRAM for weight storage and fully digital synapse processors. This chip has the second smallest cell size, primarily due to its multiplexing of the synaptic processing elements and because it multiplexes them more times than any of the other designs.
CONNECTIONS PER SECOND PER WEIGHT
(CPSPW)
The ratio of connections/second per weight can be estimated for biological systems to
be on the order of 100 assuming one weight is stored in each synapse. If neurons are
firing 100 times per second, then each of the synapses must be processing pulses about
100 times per second, hence the CPSPW of 100. This number is clearly related to neuron
firing rate. Less obvious is how CPSPW might be connected with the precision of the
biological computing elements and the time frame in which the whole network seeks to
produce final results.
Following von Neumann's arguments [12], arithmetical error grows in proportion to the
number of steps of processing. This is partly due to round-off errors and partly due to
amplification of errors that occur early in the calculations. Biological neurons have
limited precision due to their analog nature. If their calculations are accurate to within
1% and they are involved in a calculation that involves propagation of results through
100 neurons in sequence, then the accumulated error could be as high as 100%, meaning
that the answer could be completely wrong. Any further calculations using this result
would be useless. In other words, a 100-step calculation is the longest calculation you
might expect a biological system to attempt because of its limited precision. Since
the time frame that biological systems are typically concerned with is around 1 second,
one might expect to see these biological systems executing about 100 operations for each
processing element in this interval. This appears to be the case. Executing any more operations than this would produce meaningless results due to the accumulation of numerical error.
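To make the scale of this argument concrete, the worst-case growth can be sketched in a few lines of Python; the 1% figure is the illustrative per-stage precision used above, not a measured value:

    # Worst-case accumulation of relative error over a chain of analog stages,
    # following von Neumann's linear-growth argument.
    per_step_error = 0.01  # assumed 1% precision of each analog computing element
    for steps in (1, 10, 100):
        worst_case = steps * per_step_error
        print(f"{steps:4d} sequential steps -> up to {worst_case:.0%} accumulated error")
    # After about 100 steps the worst-case error reaches 100%, i.e. the result
    # may carry no information: the basis for the ~100-step depth estimate.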
A rule of thumb which summarizes the suggested relationship between CPSPW,
precision and the time frame of interest would be: the number of connections executed
per weight in the interval of interest should be equal to the dynamic range of the weights. The
dynamic range of a weight is just the inverse of its precision, or the maximum possible
weight value minus the minimum weight value divided by the smallest increment in a
weight which has significance.
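A minimal sketch of this rule of thumb, with hypothetical helper functions and ballpark numbers (none of these figures come from the chips reviewed here):

    # CPSPW = connections per second / number of stored weights.
    def cpspw(connections_per_second, num_weights):
        return connections_per_second / num_weights

    # Dynamic range = (max weight - min weight) / smallest significant increment,
    # i.e. the inverse of the weight precision.
    def dynamic_range(w_max, w_min, smallest_step):
        return (w_max - w_min) / smallest_step

    # Biological ballpark: ~100 Hz firing implies CPSPW ~ 100, which the rule
    # of thumb matches to a weight precision of about 1% over a ~1 second task.
    print(cpspw(connections_per_second=1e4, num_weights=100))  # -> 100.0
    print(dynamic_range(1.0, -1.0, 0.02))                      # -> 100.0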
Motor control, vision, handwriting and speech recognition tasks all fall within the
"human time frame". The rule of thumb suggests that if neural network implementations with limited precision weights are used to solve these problems then these systems
are likely to work best with the same CPSPW as biological neural systems, around 100.
Since all of the neural network implementations reviewed here have CPSPW's well
above 100 we might conclude that they are not optimal for these human time frame
tasks. They don't have enough weights relative to their processing power. Standard von
Neumann computers have CPSPW's which are much lower than those of biological systems. The number of operations per second per word of memory in a typical von
Neumann machine is around 1. Von Neumann machines today don't have enough processing power relative to their memory size to be optimal for executing neural network
solutions to problems in the human time frame.
For systems where results are sought in a time frame shorter than the human time frame
a higher CPSPW should be used according to the rule of thumb. All of the designs
reviewed here have a CPSPW much higher than 100. See the right most column of
Table I. The AT&T chip [2] has a CPSPW ratio in the millions, and many of the
chips [3,4,7,8,9] have a ratio around 100,000. Chips with low CPSPW's are the same
chips that multiplex the synaptic processors [1,5,6]. They can store more weights
because they don't replicate the synaptic processor for every weight. Their processing
rates are lowered, which also lowers their CPSPW, because they have fewer synaptic processors working simultaneously. This is not particularly desirable, but it results in a better
balance between processing power and memory for tasks which don't need to be done
any faster than in the human time frame.
FUTURE DIRECTION
Von Neumann machines with their high degree of flexibility will continue to be critical in the near term as neural models continue their rapid evolution. Multiprocessor
(> 10) von Neumann machines optimized for neural type calculations are sorely needed.
One such device [5] is already on the horizon. Hennessy and Patterson's quantitative
approach [10] to computer design would be appropriate.
Neural network implementations with more weights are needed for making further
progress in solving the difficult "human time domain" problems of speech and vision.
A 1B CPS machine with 10M weights is needed. Devices which multiplex the synaptic processing elements appear to be the best candidates for accomplishing this goal.
The challenge here is to keep the bandwidth high even after the weight cache is moved
off chip.
Using DRAM or "floating gate" memory cells, which normally store digital information,
to store analog information instead in the same space is an approach which can be used
in conjunction with multiplexed analog synapse processors to achieve a 6-8X improvement in the number of weights per synaptic processor with little penalty in die area. This
general direction is largely unexplored except for the CCD implementation done by
Agranat et al. [6].
VLSI implementations with fully parallel processing synaptic arrays represent a new
computational capability: higher performance than can be achieved by any other means
with given power and space. The availability of this new computing capability will open
up new applications, but will likely take time. The majority of chips reviewed in this
paper fall into this category [2,3,4,7,8,9].
SUMMARY
The VLSI implementations to date are mostly high performance devices with limited
memory. An image of slugs crawling at the speed of sound comes to mind. There will
be applications for these "supersonic slugs", but, they are unlikely to make VLSI neural
networks a big business any time soon. Implementations with more flexibility or more
storage relative to processing power seem to be needed.
References
[1] Yestrebsky, J., Basehore, P., Reed, J., "Neural Bit Slice Computing Element",
Product Ap. Note TPI02600, Micro Devices, Orlando, FL.
[2] Graf, H., Henderson, D., "A Reconfigurable CMOS Neural Network", 1990 Int'l
Solid State Circuits Conference, San Francisco, CA.
[3] Alspector, J., Allen, R., Jayakumar, A., "Relaxation Networks for Large Supervised
Learning Problems", Advances in Neural Information Processing Systems 3, 1991, San
Mateo, CA: Morgan Kaufmann.
[4] Arima, Y., et al., "336 Neuron 28K Synapse Self-Learning Neural Network Chip with
Branch Neuron-Unit Architecture", 1991 IEEE Int'l Solid State Circuits Conference.
[5] Griffin, M., Hammerstrom, D., et al., "An 11 Million Transistor Digital Neural
Network Execution Engine", 1991 IEEE Int'l Solid State Circuits Conference.
[6] Agranat, R., Neugebauer, C., Yariv, A., "A CCD Based Neural Network Integrated
Circuit with 64K Analog Programmable Synapses", Int'l Joint Conference on Neural
Networks, June 1990.
[7] Gold, M., "Hitachi Unveils Prototype Neural Computer", EE-Times, Dec. 3, 1990.
[8] SU3232, NU32 Data Sheet, Neural Semiconductor Inc., Carlsbad, CA.
[9] 80170NW Electrically Trainable Analog Neural Network, Data Sheet, Intel Corp.,
Santa Clara, CA, May 1990.
[10] Hennessy, J. L., Patterson, D. A., Computer Architecture: A Quantitative Approach,
p. 17, Morgan Kaufmann, San Mateo, CA, 1990.
[11] Hinton, G., "Deterministic Boltzmann Learning Performs Steepest Descent in
Weight-Space".
[12] Von Neumann, John, The Computer and the Brain, pp. 26, 78, Yale University Press,
New Haven, 1958.
[13] Tam, S., et al., "Learning on an Analog VLSI Neural Network Chip", 1990 IEEE Int'l
Conf. on Systems, Man and Cybernetics.
2,581 | 3,340 | Kernel Measures of Conditional Dependence
Kenji Fukumizu
Institute of Statistical Mathematics
4-6-7 Minami-Azabu, Minato-ku
Tokyo 106-8569 Japan
fukumizu@ism.ac.jp
Arthur Gretton
Max-Planck Institute for Biological Cybernetics
Spemannstraße 38, 72076 Tübingen, Germany
arthur.gretton@tuebingen.mpg.de
Xiaohai Sun
Max-Planck Institute for Biological Cybernetics
Spemannstraße 38, 72076 Tübingen, Germany
xiaohi@tuebingen.mpg.de
Bernhard Schölkopf
Max-Planck Institute for Biological Cybernetics
Spemannstraße 38, 72076 Tübingen, Germany
bernhard.schoelkopf@tuebingen.mpg.de
Abstract
We propose a new measure of conditional dependence of random variables, based
on normalized cross-covariance operators on reproducing kernel Hilbert spaces.
Unlike previous kernel dependence measures, the proposed criterion does not depend on the choice of kernel in the limit of infinite data, for a wide class of kernels. At the same time, it has a straightforward empirical estimate with good
convergence behaviour. We discuss the theoretical properties of the measure, and
demonstrate its application in experiments.
1
Introduction
Measuring dependence of random variables is one of the main concerns of statistical inference. A
typical example is the inference of a graphical model, which expresses the relations among variables
in terms of independence and conditional independence. Independent component analysis employs
a measure of independence as the objective function, and feature selection in supervised learning
looks for a set of features on which the response variable most depends.
Kernel methods have been successfully used for capturing (conditional) dependence of variables
[1, 5, 8, 9, 16]. With the ability to represent high order moments, mapping of variables into reproducing kernel Hilbert spaces (RKHSs) allows us to infer properties of the distributions, such as
independence and homogeneity [7]. A drawback of previous kernel dependence measures, however,
is that their value depends not only on the distribution of the variables, but also on the kernel, in
contrast to measures such as mutual information.
In this paper, we propose to use the Hilbert-Schmidt norm of the normalized conditional cross-covariance operator, and show that this operator encodes the dependence structure of random variables. Our criterion includes a measure of unconditional dependence as a special case. We prove in
the limit of infinite data, under assumptions on the richness of the RKHS, that this measure has an
explicit integral expression which depends only on the probability densities of the variables, despite
being defined in terms of kernels. We also prove that its empirical estimate converges to the kernel-independent value as the sample size increases. Furthermore, we provide a general formulation for
the "richness" of an RKHS, and a theoretically motivated kernel selection method. We successfully
apply our measure in experiments on synthetic and real data.
2
Measuring conditional dependence with kernels
The probability law of a random variable $X$ is denoted by $P_X$, and the space of the square integrable functions with probability $P$ by $L^2(P)$. The symbol $X \perp\!\!\!\perp Y \mid Z$ indicates the conditional independence of $X$ and $Y$ given $Z$. The null space and the range of an operator $T$ are written $\mathcal{N}(T)$ and $\mathcal{R}(T)$, respectively.
2.1
Dependence measures with normalized cross-covariance operators
Covariance operators on RKHSs have been successfully used for capturing dependence and conditional dependence of random variables, by incorporating high order moments [5, 8, 16]. We give a brief review here; see [5, 6, 2] for further detail. Suppose we have a random variable $(X, Y)$ on $\mathcal{X} \times \mathcal{Y}$, and RKHSs $H_X$ and $H_Y$ on $\mathcal{X}$ and $\mathcal{Y}$, respectively, with measurable positive definite kernels $k_X$ and $k_Y$. Throughout this paper, we assume the integrability

(A-1)   $E[k_X(X, X)] < \infty$,   $E[k_Y(Y, Y)] < \infty$.

This assumption ensures $H_X \subset L^2(P_X)$ and $H_Y \subset L^2(P_Y)$. The cross-covariance operator $\Sigma_{YX} : H_X \to H_Y$ is defined by the unique bounded operator that satisfies

$\langle g, \Sigma_{YX} f \rangle_{H_Y} = \mathrm{Cov}[f(X), g(Y)]$   $(= E[f(X)g(Y)] - E[f(X)]E[g(Y)])$   (1)

for all $f \in H_X$ and $g \in H_Y$. If $Y = X$, $\Sigma_{XX}$ is called the covariance operator, which is self-adjoint and positive. The operator $\Sigma_{YX}$ naturally extends the covariance matrix $C_{YX}$ on Euclidean spaces, and represents higher order correlations of $X$ and $Y$ through $f(X)$ and $g(Y)$ with nonlinear kernels. It is known [2] that the cross-covariance operator can be decomposed into the covariance of the marginals and the correlation; that is, there exists a unique bounded operator $V_{YX}$ such that

$\Sigma_{YX} = \Sigma_{YY}^{1/2} V_{YX} \Sigma_{XX}^{1/2}$,   (2)

$\mathcal{R}(V_{YX}) \subset \overline{\mathcal{R}(\Sigma_{YY})}$, and $\mathcal{N}(V_{YX})^{\perp} \subset \overline{\mathcal{R}(\Sigma_{XX})}$. The operator norm of $V_{YX}$ is less than or equal to 1. We call $V_{YX}$ the normalized cross-covariance operator (NOCCO, see also [4]).

While the operator $V_{YX}$ encodes the same information regarding the dependence of $X$ and $Y$ as $\Sigma_{YX}$, the former rather expresses the information more directly than $\Sigma_{YX}$, with less influence of the marginals. This relation can be understood as an analogue to the difference between the covariance $\mathrm{Cov}[X, Y]$ and the correlation $\mathrm{Cov}[X, Y]/(\mathrm{Var}(X)\mathrm{Var}(Y))^{1/2}$. Note also that kernel canonical correlation analysis [1] uses the largest eigenvalue of $V_{YX}$ and its corresponding eigenfunctions [4].

Suppose we have another random variable $Z$ on $\mathcal{Z}$ and RKHS $(H_Z, k_Z)$, which satisfy the analog to (A-1). We then define the normalized conditional cross-covariance operator,

$V_{YX|Z} = V_{YX} - V_{YZ} V_{ZX}$,   (3)

for measuring the conditional dependence of $X$ and $Y$ given $Z$, where $V_{YZ}$ and $V_{ZX}$ are defined similarly to Eq. (2). The operator $V_{YX|Z}$ may be better understood by expressing it as

$V_{YX|Z} = \Sigma_{YY}^{-1/2} \big( \Sigma_{YX} - \Sigma_{YZ} \Sigma_{ZZ}^{-1} \Sigma_{ZX} \big) \Sigma_{XX}^{-1/2}$,

where $\Sigma_{YX|Z} = \Sigma_{YX} - \Sigma_{YZ} \Sigma_{ZZ}^{-1} \Sigma_{ZX}$ can be interpreted as a nonlinear extension of the conditional covariance matrix $C_{YX} - C_{YZ} C_{ZZ}^{-1} C_{ZX}$ of Gaussian random variables.
The operator $\Sigma_{YX}$ can be used to determine the independence of $X$ and $Y$: roughly speaking, $\Sigma_{YX} = O$ if and only if $X \perp\!\!\!\perp Y$. Similarly, a relation between $\Sigma_{YX|Z}$ and conditional independence, $X \perp\!\!\!\perp Y \mid Z$, has been established in [5]: if the extended variables $\ddot{X} = (X, Z)$ and $\ddot{Y} = (Y, Z)$ are used, $X \perp\!\!\!\perp Y \mid Z$ is equivalent to $\Sigma_{\ddot{X}\ddot{Y}|Z} = O$. We will give a rigorous treatment in Section 2.2.
Noting that the conditions $\Sigma_{YX} = O$ and $\Sigma_{YX|Z} = O$ are equivalent to $V_{YX} = O$ and $V_{YX|Z} = O$, respectively, we propose to use the Hilbert-Schmidt norms of the latter operators as dependence measures. Recall that an operator $A : H_1 \to H_2$ is called Hilbert-Schmidt if for complete orthonormal systems (CONSs) $\{\phi_i\}$ of $H_1$ and $\{\psi_j\}$ of $H_2$, the sum $\sum_{i,j} \langle \psi_j, A \phi_i \rangle_{H_2}^2$ is finite (see [13]). For a Hilbert-Schmidt operator $A$, the Hilbert-Schmidt (HS) norm $\|A\|_{HS}$ is defined by $\|A\|_{HS}^2 = \sum_{i,j} \langle \psi_j, A \phi_i \rangle_{H_2}^2$. It is easy to see that this sum is independent of the choice of CONSs. Provided that $V_{YX}$ and $V_{YX|Z}$ are Hilbert-Schmidt, we propose the following measures:

$I^{\mathrm{COND}}(X, Y \mid Z) = \| V_{\ddot{Y}\ddot{X}|Z} \|_{HS}^2$,   (4)

$I^{\mathrm{NOCCO}}(X, Y) = \| V_{YX} \|_{HS}^2$.   (5)
A sufficient condition that these operators are Hilbert-Schmidt will be discussed in Section 2.3.
It is easy to provide empirical estimates of the measures. Let $(X_1, Y_1, Z_1), \ldots, (X_n, Y_n, Z_n)$ be an i.i.d. sample from the joint distribution. Using the empirical mean elements $\widehat{m}_X^{(n)} = \frac{1}{n}\sum_{i=1}^n k_X(\cdot, X_i)$ and $\widehat{m}_Y^{(n)} = \frac{1}{n}\sum_{i=1}^n k_Y(\cdot, Y_i)$, an estimator of $\Sigma_{YX}$ is

$\widehat{\Sigma}_{YX}^{(n)} = \frac{1}{n}\sum_{i=1}^n \big(k_Y(\cdot, Y_i) - \widehat{m}_Y^{(n)}\big) \big\langle k_X(\cdot, X_i) - \widehat{m}_X^{(n)}, \cdot \big\rangle_{H_X}$.

$\widehat{\Sigma}_{XX}^{(n)}$ and $\widehat{\Sigma}_{YY}^{(n)}$ are defined similarly. The estimators of $V_{YX}$ and $V_{YX|Z}$ are respectively

$\widehat{V}_{YX}^{(n)} = \big(\widehat{\Sigma}_{YY}^{(n)} + \varepsilon_n I\big)^{-1/2} \, \widehat{\Sigma}_{YX}^{(n)} \, \big(\widehat{\Sigma}_{XX}^{(n)} + \varepsilon_n I\big)^{-1/2}$,   (6)

where $\varepsilon_n > 0$ is a regularization constant used in the same way as [1, 5], and

$\widehat{V}_{YX|Z}^{(n)} = \widehat{V}_{YX}^{(n)} - \widehat{V}_{YZ}^{(n)} \widehat{V}_{ZX}^{(n)}$,

from Eq. (3). The HS norm of the finite rank operator $\widehat{V}_{YX|Z}^{(n)}$ is easy to calculate. Let $G_X$, $G_Y$, and $G_Z$ be the centered Gram matrices, such that $G_{X,ij} = \langle k_X(\cdot, X_i) - \widehat{m}_X^{(n)}, \, k_X(\cdot, X_j) - \widehat{m}_X^{(n)} \rangle_{H_X}$ and so on, and define $R_X$, $R_Y$, and $R_Z$ as $R_X = G_X (G_X + n\varepsilon_n I_n)^{-1}$, $R_Y = G_Y (G_Y + n\varepsilon_n I_n)^{-1}$, and $R_Z = G_Z (G_Z + n\varepsilon_n I_n)^{-1}$. The empirical dependence measures are then

$\widehat{I}_n^{\mathrm{COND}} \equiv \big\| \widehat{V}_{\ddot{Y}\ddot{X}|Z}^{(n)} \big\|_{HS}^2 = \mathrm{Tr}\big[ R_{\ddot{Y}} R_{\ddot{X}} - 2 R_{\ddot{Y}} R_{\ddot{X}} R_Z + R_{\ddot{Y}} R_Z R_{\ddot{X}} R_Z \big]$,   (7)

$\widehat{I}_n^{\mathrm{NOCCO}}(X, Y) \equiv \big\| \widehat{V}_{YX}^{(n)} \big\|_{HS}^2 = \mathrm{Tr}\big[ R_Y R_X \big]$,   (8)

where the extended variables are used for $\widehat{I}_n^{\mathrm{COND}}$. These empirical estimators, and use of $\varepsilon_n$, will be justified in Section 2.4 by showing the convergence to $I^{\mathrm{NOCCO}}$ and $I^{\mathrm{COND}}$. With the incomplete Cholesky decomposition [17] of rank $r$, the complexity to compute $\widehat{I}_n^{\mathrm{COND}}$ is $O(r^2 n)$.
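The quantities in Eqs. (6)-(8) are straightforward to compute with dense linear algebra. The following Python/NumPy sketch implements them naively in O(n^3) time (no incomplete Cholesky); the Gaussian kernel and bandwidths are illustrative choices, and the extended variables are formed by concatenating Z to X and Y, which for Gaussian kernels with a shared width equals the product kernel $k_X k_Z$:

    import numpy as np

    def centered_gram(X, sigma):
        # Centered Gaussian Gram matrix G = H K H with H = I - (1/n) 1 1^T.
        sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        K = np.exp(-sq / (2.0 * sigma ** 2))
        n = len(X)
        H = np.eye(n) - np.full((n, n), 1.0 / n)
        return H @ K @ H

    def R_matrix(G, eps):
        # R = G (G + n * eps * I)^{-1}, cf. the definitions before Eq. (7).
        n = len(G)
        return G @ np.linalg.inv(G + n * eps * np.eye(n))

    def I_nocco(X, Y, sigma_x=1.0, sigma_y=1.0, eps=1e-6):
        RX = R_matrix(centered_gram(X, sigma_x), eps)
        RY = R_matrix(centered_gram(Y, sigma_y), eps)
        return np.trace(RY @ RX)  # Eq. (8)

    def I_cond(X, Y, Z, sigma=1.0, eps=1e-6):
        RXt = R_matrix(centered_gram(np.hstack([X, Z]), sigma), eps)  # extended X
        RYt = R_matrix(centered_gram(np.hstack([Y, Z]), sigma), eps)  # extended Y
        RZ = R_matrix(centered_gram(Z, sigma), eps)
        return np.trace(RYt @ RXt - 2.0 * RYt @ RXt @ RZ
                        + RYt @ RZ @ RXt @ RZ)  # Eq. (7)

All inputs are arrays of shape (n, d). This is a rough sketch for clarity, not the authors' implementation.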
2.2
Inference on probabilities by characteristic kernels
To relate $I^{\mathrm{NOCCO}}$ and $I^{\mathrm{COND}}$ with independence and conditional independence, respectively, the RKHS should contain a sufficiently rich class of functions to represent all higher order moments. Similar notions have already appeared in the literature: universal kernels on compact domains [15] and Gaussian kernels on the entire $\mathbb{R}^m$ characterize independence via the cross-covariance operator [8, 1]. We now discuss a unified class of kernels for inference on probabilities.
Let $(\mathcal{X}, \mathcal{B})$ be a measurable space, $X$ a random variable on $\mathcal{X}$, and $(H, k)$ an RKHS on $\mathcal{X}$ satisfying assumption (A-1). The mean element of $X$ on $H$ is defined by the unique element $m_X \in H$ such that $\langle m_X, f \rangle_H = E[f(X)]$ for all $f \in H$ (see [7]). If the distribution of $X$ is $P$, we also use $m_P$ to denote $m_X$. Letting $\mathcal{P}$ be the family of all probabilities on $(\mathcal{X}, \mathcal{B})$, we define the map $M_k$ by

$M_k : \mathcal{P} \to H, \quad P \mapsto m_P$.

The kernel $k$ is said to be characteristic¹ if the map $M_k$ is injective, or equivalently, if the condition $E_{X \sim P}[f(X)] = E_{X \sim Q}[f(X)]$ $(\forall f \in H)$ implies $P = Q$.

The notion of a characteristic kernel is an analogy to the characteristic function $E_P[e^{\sqrt{-1} u^T X}]$, which is the expectation of the Fourier kernel $k_F(x, u) = e^{\sqrt{-1} u^T x}$. Noting that $m_P = m_Q$ iff $E_P[k(u, X)] = E_Q[k(u, X)]$ for all $u \in \mathcal{X}$, the definition of a characteristic kernel generalizes the well-known property of the characteristic function that $E_P[k_F(u, X)]$ uniquely determines a Borel probability $P$ on $\mathbb{R}^m$. The next lemma is useful to show that a kernel is characteristic.

¹ Although the same notion was called probability-determining in [5], we call it "characteristic" by analogy with the characteristic function.
Lemma 1. Let $q \geq 1$. Suppose that $(H, k)$ is an RKHS on a measurable space $(\mathcal{X}, \mathcal{B})$ with $k$ measurable and bounded. If $H + \mathbb{R}$ (the direct sum of the two RKHSs) is dense in $L^q(\mathcal{X}, P)$ for any probability $P$ on $(\mathcal{X}, \mathcal{B})$, the kernel $k$ is characteristic.

Proof. Assume $m_P = m_Q$. By the assumption, for any $\varepsilon > 0$ and a measurable set $A$, there is a function $f \in H$ and $c \in \mathbb{R}$ such that $|E_P[f(X)] + c - P(A)| < \varepsilon$ and $|E_Q[f(Y)] + c - Q(A)| < \varepsilon$, from which we have $|P(A) - Q(A)| < 2\varepsilon$. Since $\varepsilon > 0$ is arbitrary, this means $P(A) = Q(A)$.
Many popular kernels are characteristic. For a compact metric space, it is easy to see that the RKHS given by a universal kernel [15] is dense in $L^2(P)$ for any $P$, and thus characteristic (see also [7], Theorem 3). It is also important to consider kernels on non-compact spaces, since many standard random variables, such as Gaussian variables, are defined on non-compact spaces. The next theorem implies that many kernels on the entire $\mathbb{R}^m$, including Gaussian and Laplacian, are characteristic. The proof is an extension of Theorem 2 in [1], and is given in the supplementary material.

Theorem 2. Let $\phi(z)$ be a continuous positive function on $\mathbb{R}^m$ with the Fourier transform $\widehat{\phi}(u)$, and $k$ be a kernel of the form $k(x, y) = \phi(x - y)$. If for any $\xi \in \mathbb{R}^m$ there exists $\tau_0$ such that $\int \widehat{\phi}(\tau(u + \xi))^2 / \widehat{\phi}(u) \, du < \infty$ for all $\tau > \tau_0$, then the RKHS associated with $k$ is dense in $L^2(P)$ for any Borel probability $P$ on $\mathbb{R}^m$. Hence $k$ is characteristic with respect to the Borel $\sigma$-field.
The assumptions to relate the operators with independence are well described by using characteristic kernels and denseness. The next result generalizes Corollary 9 in [5] (we omit the proof; see [5, 6]).

Theorem 3. (i) Assume (A-1) for the kernels. If the product $k_X k_Y$ is characteristic, then we have

$V_{YX} = O \iff X \perp\!\!\!\perp Y$.

(ii) Denote $\ddot{X} = (X, Z)$ and $k_{\ddot{X}} = k_X k_Z$. In addition to (A-1), assume that the product $k_{\ddot{X}} k_Y$ is a characteristic kernel on $(\mathcal{X} \times \mathcal{Z}) \times \mathcal{Y}$, and $H_Z + \mathbb{R}$ is dense in $L^2(P_Z)$. Then,

$V_{\ddot{Y}\ddot{X}|Z} = O \iff X \perp\!\!\!\perp Y \mid Z$.

From the above results, we can guarantee that $V_{YX}$ and $V_{\ddot{Y}\ddot{X}|Z}$ will detect independence and conditional independence, if we use a Gaussian or Laplacian kernel either on a compact set or the whole of $\mathbb{R}^m$. Note also that we can substitute $V_{Y\ddot{X}|Z}$ for $V_{\ddot{Y}\ddot{X}|Z}$ in Theorem 3 (ii).

2.3
Kernel-free integral expression of the measures

A remarkable property of $I^{\mathrm{NOCCO}}$ and $I^{\mathrm{COND}}$ is that they do not depend on the kernels under some assumptions, having integral expressions containing only the probability density functions. The probability $E_Z[P_{X|Z} \otimes P_{Y|Z}]$ on $\mathcal{X} \times \mathcal{Y}$ is defined by $E_Z[P_{Y|Z} \otimes P_{X|Z}](B \times A) = \int E[\chi_B(Y) \mid Z = z] \, E[\chi_A(X) \mid Z = z] \, dP_Z(z)$ for $A \in \mathcal{B}_X$ and $B \in \mathcal{B}_Y$.
Theorem 4. Let $\mu_X$ and $\mu_Y$ be measures on $\mathcal{X}$ and $\mathcal{Y}$, respectively, and assume that the probabilities $P_{XY}$ and $E_Z[P_{X|Z} \otimes P_{Y|Z}]$ are absolutely continuous with respect to $\mu_X \otimes \mu_Y$ with probability density functions $p_{XY}$ and $p_{X \perp Y | Z}$, respectively. If $H_Z + \mathbb{R}$ and $(H_X \otimes H_Y) + \mathbb{R}$ are dense in $L^2(P_Z)$ and $L^2(P_X \otimes P_Y)$, respectively, and $V_{YX}$ and $V_{YZ} V_{ZX}$ are Hilbert-Schmidt, then we have

$I^{\mathrm{COND}} = \| V_{YX|Z} \|_{HS}^2 = \iint_{\mathcal{X} \times \mathcal{Y}} \Big( \frac{p_{XY}(x, y)}{p_X(x) p_Y(y)} - \frac{p_{X \perp Y | Z}(x, y)}{p_X(x) p_Y(y)} \Big)^2 p_X(x) p_Y(y) \, d\mu_X d\mu_Y$,

where $p_X$ and $p_Y$ are the density functions of the marginal distributions $P_X$ and $P_Y$, respectively. As a special case of $Z = \emptyset$, we have

$I^{\mathrm{NOCCO}} = \| V_{YX} \|_{HS}^2 = \iint_{\mathcal{X} \times \mathcal{Y}} \Big( \frac{p_{XY}(x, y)}{p_X(x) p_Y(y)} - 1 \Big)^2 p_X(x) p_Y(y) \, d\mu_X d\mu_Y$.   (9)
Sketch of the proof (see the supplement for the complete proof). Since it is known [8] that $\Sigma_{ZZ}$ is Hilbert-Schmidt under (A-1), there exist CONSs $\{\phi_i\}_{i=1}^{\infty} \subset H_X$ and $\{\psi_j\}_{j=1}^{\infty} \subset H_Y$ consisting of the eigenfunctions of $\Sigma_{XX}$ and $\Sigma_{YY}$, respectively, with $\Sigma_{XX} \phi_i = \lambda_i \phi_i$ $(\lambda_i \geq 0)$ and $\Sigma_{YY} \psi_j = \mu_j \psi_j$ $(\mu_j \geq 0)$. Then, $\| V_{YX|Z} \|_{HS}^2$ admits the expansion

$\sum_{i,j=1}^{\infty} \big\{ \langle \psi_j, V_{YX} \phi_i \rangle_{H_Y}^2 - 2 \langle \psi_j, V_{YX} \phi_i \rangle_{H_Y} \langle \psi_j, V_{YZ} V_{ZX} \phi_i \rangle_{H_Y} + \langle \psi_j, V_{YZ} V_{ZX} \phi_i \rangle_{H_Y}^2 \big\}$.

Let $I_+^X = \{ i \in \mathbb{N} \mid \lambda_i > 0 \}$ and $I_+^Y = \{ j \in \mathbb{N} \mid \mu_j > 0 \}$, and define $\widetilde{\phi}_i = (\phi_i - E[\phi_i(X)]) / \sqrt{\lambda_i}$ and $\widetilde{\psi}_j = (\psi_j - E[\psi_j(Y)]) / \sqrt{\mu_j}$ for $i \in I_+^X$ and $j \in I_+^Y$. For simplicity, $L^2$ denotes $L^2(P_X \otimes P_Y)$. With the notations $\widetilde{\phi}_0 = 1$ and $\widetilde{\psi}_0 = 1$, it is easy to see that the class $\{ \widetilde{\phi}_i \widetilde{\psi}_j \}_{i \in I_+^X \cup \{0\}, \, j \in I_+^Y \cup \{0\}}$ is a CONS of $L^2$. From Parseval's equality, the first term of the above expansion is rewritten as

$\sum_{i \in I_+^X, j \in I_+^Y} \langle \psi_j, V_{YX} \phi_i \rangle_{H_Y}^2 = \sum_{i \in I_+^X, j \in I_+^Y} E_{YX}\big[ \widetilde{\psi}_j(Y) \widetilde{\phi}_i(X) \big]^2 = \sum_{i \in I_+^X, j \in I_+^Y} \Big\langle \widetilde{\phi}_i \widetilde{\psi}_j, \frac{p_{XY}}{p_X p_Y} \Big\rangle_{L^2}^2$
$= \Big\| \frac{p_{XY}(x, y)}{p_X(x) p_Y(y)} \Big\|_{L^2}^2 - \sum_{i \in I_+^X} E[\widetilde{\phi}_i(X)]^2 - \sum_{j \in I_+^Y} E[\widetilde{\psi}_j(Y)]^2 - 1 = \Big\| \frac{p_{XY}(x, y)}{p_X(x) p_Y(y)} \Big\|_{L^2}^2 - 1$.

By a similar argument, the second and third term of the expansion are rewritten as $-2 \big\langle \frac{p_{XY}}{p_X p_Y}, \frac{p_{X \perp Y | Z}}{p_X p_Y} \big\rangle_{L^2} + 2$ and $\big\| \frac{p_{X \perp Y | Z}}{p_X p_Y} \big\|_{L^2}^2 - 1$, respectively. This completes the proof.
Many practical kernels, such as the Gaussian and Laplacian, satisfy the assumptions in the above theorem, as we saw in Theorem 2 and the remark after Lemma 1. While the empirical estimate from finite samples depends on the choice of kernels, it is a desirable property for the empirical dependence measure to converge to a value that depends only on the distributions of the variables. Eq. (9) shows that, under the assumptions, $I^{\mathrm{NOCCO}}$ is equal to the mean square contingency, a well-known dependence measure [14] commonly used for discrete variables. As we show in Section 2.4, $\widehat{I}_n^{\mathrm{NOCCO}}$ works as a consistent kernel estimator of the mean square contingency.

The expression of Eq. (9) can be compared with the mutual information,

$MI(X, Y) = \iint_{\mathcal{X} \times \mathcal{Y}} p_{XY}(x, y) \log \frac{p_{XY}(x, y)}{p_X(x) p_Y(y)} \, d\mu_X d\mu_Y$.

Both the mutual information and the mean square contingency are nonnegative, and equal to zero if and only if $X$ and $Y$ are independent. Note also that from $\log z \leq z - 1$, the inequality $MI(X, Y) \leq I^{\mathrm{NOCCO}}(X, Y)$ holds under the assumptions of Theorem 4. While the mutual information is the best known dependence measure, its finite sample empirical estimate is not straightforward, especially for continuous variables. The direct estimation of a probability density function is infeasible if the joint space has even a moderate number of dimensions.
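For discrete variables both quantities reduce to finite sums, and the inequality can be checked directly; a small sketch with an arbitrary 2 x 2 joint distribution (the numbers are made up for illustration):

    import numpy as np

    p = np.array([[0.30, 0.10],
                  [0.15, 0.45]])           # joint p_XY on a 2 x 2 alphabet
    px = p.sum(axis=1, keepdims=True)      # marginal p_X
    py = p.sum(axis=0, keepdims=True)      # marginal p_Y
    ratio = p / (px * py)
    mi = np.sum(p * np.log(ratio))                 # mutual information
    msc = np.sum((ratio - 1.0) ** 2 * px * py)     # mean square contingency
    print(mi, msc, mi <= msc)                      # e.g. 0.126, 0.242, True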
2.4
Consistency of the measures
It is important to ask whether the empirical measures converge to the population values $I^{\mathrm{COND}}$ and $I^{\mathrm{NOCCO}}$, since this provides a theoretical justification for the empirical measures. It is known [4] that $\widehat{V}_{YX}^{(n)}$ converges in probability to $V_{YX}$ in operator norm. The next theorem asserts convergence in HS norm, provided that $V_{YX}$ is Hilbert-Schmidt. Although the proof is analogous to the case of operator norm, it is more involved to discuss the HS norm. We give it in the supplementary material.

Theorem 5. Assume that $V_{YX}$, $V_{YZ}$, and $V_{ZX}$ are Hilbert-Schmidt, and that the regularization constant $\varepsilon_n$ satisfies $\varepsilon_n \to 0$ and $\varepsilon_n^3 n \to \infty$. Then, we have the convergence in probability

$\| \widehat{V}_{YX}^{(n)} - V_{YX} \|_{HS} \to 0 \quad \text{and} \quad \| \widehat{V}_{YX|Z}^{(n)} - V_{YX|Z} \|_{HS} \to 0 \qquad (n \to \infty)$.   (10)

In particular, $\widehat{I}_n^{\mathrm{NOCCO}} \to I^{\mathrm{NOCCO}}$ and $\widehat{I}_n^{\mathrm{COND}} \to I^{\mathrm{COND}}$ $(n \to \infty)$ in probability.
2.5
Choice of kernels
As with all empirical measures, the sample estimates $\widehat{I}_n^{\mathrm{NOCCO}}$ and $\widehat{I}_n^{\mathrm{COND}}$ are dependent on the kernel, and the problem of choosing a kernel has yet to be solved. Unlike supervised learning, there are no easy criteria to choose a kernel for dependence measures. We propose a method of choosing a kernel by considering the large sample behavior. We explain the method only briefly in this paper. The basic idea is that a kernel should be chosen so that the covariance operator detects independence of variables as effectively as possible.

Figure 1: Left and Middle: Examples of data ($\theta = 0$ and $\theta = \pi/4$). Right: The marks "o" and "+" show $\widehat{I}_n^{\mathrm{NOCCO}}$ for each angle and the 95th percentile of the permutation test, respectively.

It has been recently shown [10], under the independence of $X$ and $Y$, that the measure $\mathrm{HSIC} = \| \widehat{\Sigma}_{YX}^{(n)} \|_{HS}^2$ ([8]) multiplied by $n$ converges to an infinite mixture of $\chi^2$ distributions with variance $\mathrm{Var}_{\lim}[n\,\mathrm{HSIC}] = 2 \| \Sigma_{XX} \|_{HS}^2 \| \Sigma_{YY} \|_{HS}^2$. We choose a
kernel so that the bootstrapped variance $\mathrm{Var}_B[n\,\mathrm{HSIC}]$ of $n\,\mathrm{HSIC}$ is close to this theoretical limit variance. More precisely, we compare the ratio $T_\sigma = \mathrm{Var}_B[n\,\mathrm{HSIC}] / \mathrm{Var}_{\lim}[n\,\mathrm{HSIC}]$ for various candidate kernels. In preliminary experiments for choosing the variance parameter $\sigma$ of Gaussian kernels, we often observed the ratio decays and saturates below 1 as $\sigma$ increases. Therefore, we use the $\sigma$ starting the saturation, by choosing the minimum of $\sigma$ among all candidates that satisfy $|T_\sigma - \gamma| \leq (1 + \varepsilon) \min_{\sigma'} |T_{\sigma'} - \gamma|$ for $\varepsilon > 0$, $\gamma \in (0, 1]$. We always use $\varepsilon = 0.1$ and $\gamma = 0.5$. We can expect that the chosen kernel uses the data effectively. While there is no rigorous theoretical guarantee, in the next section we see that the method gives a reasonable result for $\widehat{I}_n^{\mathrm{NOCCO}}$ and $\widehat{I}_n^{\mathrm{COND}}$.
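A rough sketch of this selection rule follows, reusing centered_gram from the earlier sketch. The biased statistic HSIC = Tr[G_X G_Y]/n^2 and the plug-in estimate of $\|\Sigma_{XX}\|_{HS}^2$ via Tr[G_X G_X]/n^2 are standard, but using random permutations of Y as the "bootstrap" under independence is our simplification, not necessarily the authors' procedure:

    import numpy as np

    def hsic_stat(GX, GY):
        n = len(GX)
        return np.trace(GX @ GY) / n ** 2

    def variance_ratio(X, Y, sigma, n_boot=100, seed=0):
        rng = np.random.default_rng(seed)
        GX, GY = centered_gram(X, sigma), centered_gram(Y, sigma)
        n = len(X)
        var_lim = 2.0 * (np.trace(GX @ GX) / n ** 2) * (np.trace(GY @ GY) / n ** 2)
        stats = []
        for _ in range(n_boot):
            idx = rng.permutation(n)  # permute Y to emulate independence
            stats.append(n * hsic_stat(GX, GY[np.ix_(idx, idx)]))
        return np.var(stats) / var_lim  # T_sigma

    def choose_sigma(X, Y, candidates, gamma=0.5, eps=0.1):
        ratios = {s: variance_ratio(X, Y, s) for s in candidates}
        best = min(abs(t - gamma) for t in ratios.values())
        ok = [s for s, t in ratios.items() if abs(t - gamma) <= (1 + eps) * best]
        return min(ok)  # smallest sigma within the tolerance band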
3
Experiments
To evaluate the dependence measures, we use a permutation test of independence for data sets with various degrees of dependence. The test randomly permutes the order of $Y_1, \ldots, Y_n$ to make many samples independent of $(X_1, \ldots, X_n)$, thus simulating the null distribution under independence. For the evaluation of $\widehat{I}_n^{\mathrm{COND}}$, the range of $Z$ is partitioned into $\mathcal{Z}_1, \ldots, \mathcal{Z}_L$ with the same number of data, and the sample $\{ (X_i, Y_i) \mid Z_i \in \mathcal{Z}_\ell \}$ within the $\ell$-th bin is randomly permuted. The significance level is always set to 5%. In the following experiments, we always use Gaussian kernels $k(x_1, x_2) = \exp\big( -\frac{1}{2\sigma^2} \| x_1 - x_2 \|^2 \big)$ and choose $\sigma$ by the method proposed in Section 2.5.
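A sketch of the unconditional version of this test, reusing I_nocco from the sketch in Section 2.1; the 200 permutations are an arbitrary choice, and the conditional test would instead permute the pairs within each bin of Z:

    import numpy as np

    def permutation_test(X, Y, stat=I_nocco, n_perm=200, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        observed = stat(X, Y)
        null_stats = [stat(X, Y[rng.permutation(len(Y))]) for _ in range(n_perm)]
        threshold = np.quantile(null_stats, 1.0 - alpha)
        return observed > threshold  # True = independence rejected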
Synthetic data for dependence. The random variables $X^{(0)}$ and $Y^{(0)}$ are independent and uniformly distributed on $[-2, 2]$ and $[a, b] \cup [-b, -a]$, respectively, so that $(X^{(0)}, Y^{(0)})$ has a scalar covariance matrix. $(X^{(\theta)}, Y^{(\theta)})$ is the rotation of $(X^{(0)}, Y^{(0)})$ by $\theta \in [0, \pi/4]$ (see Figure 1). $X^{(\theta)}$ and $Y^{(\theta)}$ are always uncorrelated, but dependent for $\theta \neq 0$. We generate 100 sets of 200 data. We perform permutation tests with $\widehat{I}_n^{\mathrm{NOCCO}}$, $\mathrm{HSIC} = \| \widehat{\Sigma}_{YX}^{(n)} \|_{HS}^2$, and the mutual information (MI). For the empirical estimates of MI, we use the advanced method from [11], with no need for explicit estimation of the densities. Since $\widehat{I}_n^{\mathrm{NOCCO}}$ is an estimate of the mean square contingency, we also apply a relevant contingency-table-based independence test ([12]), partitioning the variables into bins. Figure 1 shows the values of $\widehat{I}_n^{\mathrm{NOCCO}}$ for a sample. In Table 1, we see that the results of $\widehat{I}_n^{\mathrm{NOCCO}}$ are stable w.r.t. the choice of $\varepsilon_n$, provided it is sufficiently small. We fix $\varepsilon_n = 10^{-6}$ for all remaining experiments. While all the methods are able to detect the dependence, $\widehat{I}_n^{\mathrm{NOCCO}}$ with the asymptotic choice of $\sigma$ is the most sensitive to very small dependence. We also observe the chosen parameters $\sigma_Y$ for $Y$ increase from 0.58 to 2.0 as $\theta$ increases. The small $\sigma_Y$ for small $\theta$ seems reasonable, because the range of $Y$ is split into two small regions.
Chaotic time series. We evaluate a chaotic time series derived from the coupled Hénon map. The variables $X$ and $Y$ are four dimensional: the components $X_1$, $X_2$, $Y_1$, and $Y_2$ follow the dynamics $(X_1(t+1), X_2(t+1)) = (1.4 - X_1(t)^2 + 0.3 X_2(t), \; X_1(t))$ and $(Y_1(t+1), Y_2(t+1)) = (1.4 - \{\gamma X_1(t) Y_1(t) + (1 - \gamma) Y_2(t)^2\} + 0.1 Y_2(t), \; Y_1(t))$, and $X_3$, $X_4$, $Y_3$, $Y_4$ are independent noise with $N(0, (0.5)^2)$. $X$ and $Y$ are independent for $\gamma = 0$, while they are synchronized chaos for $\gamma > 0$ (see Figure 2 for examples). A sample consists of 100 data generated from this system.
Angle (degree)                    0   4.5    9  13.5   18  22.5   27  31.5   36  40.5   45
Î_n^NOCCO (ε = 10^-4, Median)    94    23    0     0    0     0    0     0    0     0    0
Î_n^NOCCO (ε = 10^-6, Median)    92    20    1     0    0     0    0     0    0     0    0
Î_n^NOCCO (ε = 10^-8, Median)    93    15    0     0    0     0    0     0    0     0    0
Î_n^NOCCO (Asymp. Var.)          94    11    0     0    0     0    0     0    0     0    0
HSIC (Median)                    93    92   63     5    0     0    0     0    0     0    0
HSIC (Asymp. Var.)               93    44    1     0    0     0    0     0    0     0    0
MI (#Nearest Neighbors = 1)      93    62   11     0    0     0    0     0    0     0    0
MI (#Nearest Neighbors = 3)      96    43    0     0    0     0    0     0    0     0    0
MI (#Nearest Neighbors = 5)      97    49    0     0    0     0    0     0    0     0    0
Conting. Table (#Bins = 3)      100    96   46     9    1     0    0     0    0     0    0
Conting. Table (#Bins = 4)       98    29    0     0    0     0    0     0    0     0    0
Conting. Table (#Bins = 5)       98    82    5     0    0     0    0     0    0     0    0

Table 1: Comparison of dependence measures. The number of times independence is accepted out of 100 permutation tests is shown. "Asymp. Var." is the method in Section 2.5. "Median" is a heuristic method [8] which chooses σ as the median of pairwise distances of the data.
Figure 2: Chaotic time series. (a) Plot of the Hénon map; (b) $X_{t,1}$ vs. $Y_{t,1}$ ($\gamma = 0.25$): examples of data. (c) $I(X_{t+1}, Y_t \mid X_t)$ and (d) $I(Y_{t+1}, X_t \mid Y_t)$: examples of $\widehat{I}_n^{\mathrm{COND}}$ (colored "o") and the thresholds of the permutation test with significance level 5% (black "+").
Table 2 shows the results of permutation tests of independence for the instantaneous pairs $(X(t), Y(t))_{t=1}^{100}$. The proposed $\widehat{I}_n^{\mathrm{NOCCO}}$ outperforms the other methods in capturing small dependence.
Next, we apply $\widehat{I}_n^{\mathrm{COND}}$ to detect the causal structure of the same time series. Note that the series $X$ is a cause of $Y$ for $\gamma > 0$, but there is no opposite causality, i.e., $X_{t+1} \perp\!\!\!\perp Y_t \mid X_t$ and $Y_{t+1} \not\perp\!\!\!\perp X_t \mid Y_t$. In Table 3, it is remarkable that $\widehat{I}_n^{\mathrm{COND}}$ detects the small causal influence from $X_t$ to $Y_{t+1}$ for $\gamma \geq 0.1$, while for $\gamma = 0$ the result is close to the theoretical value of 95%.
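A sketch of a data generator for the coupled Hénon system described above; initial conditions are our own choices, and the published experiments may differ in such details:

    import numpy as np

    def coupled_henon(n, gamma, seed=0):
        rng = np.random.default_rng(seed)
        X = np.zeros((n, 4))
        Y = np.zeros((n, 4))
        X[0, :2] = rng.uniform(-0.1, 0.1, size=2)
        Y[0, :2] = rng.uniform(-0.1, 0.1, size=2)
        for t in range(n - 1):
            X[t + 1, 0] = 1.4 - X[t, 0] ** 2 + 0.3 * X[t, 1]
            X[t + 1, 1] = X[t, 0]
            Y[t + 1, 0] = (1.4 - (gamma * X[t, 0] * Y[t, 0]
                                  + (1.0 - gamma) * Y[t, 1] ** 2)
                           + 0.1 * Y[t, 1])
            Y[t + 1, 1] = Y[t, 0]
        X[:, 2:] = rng.normal(0.0, 0.5, size=(n, 2))  # independent noise dims
        Y[:, 2:] = rng.normal(0.0, 0.5, size=(n, 2))
        return X, Y  # X drives Y for gamma > 0; independent for gamma = 0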
Graphical modeling from medical data. This is the inference of a graphical model from data with no time structure. The data consist of three variables: creatinine clearance (C), digoxin clearance (D), and urine flow (U). These were taken from 35 patients, and analyzed with graphical models in [3, Section 3.1.4]. From medical knowledge, D should be independent of U when controlling C. Table 4 shows the results of the permutation tests and a comparison with the linear method. The relation $D \perp\!\!\!\perp U \mid C$ is strongly affirmed by $\widehat{I}_n^{\mathrm{COND}}$, while the partial correlation does not find it.
γ (strength of coupling)    0.0   0.1   0.2   0.3   0.4   0.5   0.6
Î_n^NOCCO                    97    66    21     1     0     1     0
HSIC                         75    70    58    52    13     1     0
MI (k = 3)                   87    91    83    73    23     6     0
MI (k = 5)                   87    88    75    67    23     5     0
MI (k = 7)                   87    86    75    64    21     5     0

Table 2: Results for the independence tests for the chaotic time series. The number of times independence was accepted out of 100 permutation tests is shown. γ = 0 implies independence.
                  H0: Y_t is not a cause of X_{t+1}      H0: X_t is not a cause of Y_{t+1}
γ (coupling)    0.0  0.1  0.2  0.3  0.4  0.5  0.6      0.0  0.1  0.2  0.3  0.4  0.5  0.6
Î_n^NOCCO        97   96   93   85   81   68   75       96    0    0    0    0    0    0
HSIC             94   94   92   81   60   73   66       93   95   85   56    1    1    1

Table 3: Results of the permutation test of non-causality for the chaotic time series. The number of times non-causality was accepted out of 100 tests is shown.
Kernel measure    Î_n^COND   P-value      Linear method        (partial) correl.   P-value
D ⊥ U | C          1.458      0.924       Parcorr(D, U | C)        0.4847            0.0037
C ⊥ D              0.776     <0.001       Corr(C, D)               0.7754            0.0000
C ⊥ U              0.194      0.117       Corr(C, U)               0.3092            0.0707
D ⊥ U              0.343      0.023       Corr(D, U)               0.5309            0.0010

Table 4: Graphical modeling from the medical data. Higher P-values indicate (conditional) independence more strongly.
4
Concluding remarks
There are many dependence measures, and further theoretical and experimental comparison is important. That said, one unambiguous strength of the kernel measure we propose is its kernel-free
population expression. It is interesting to ask if other classical dependence measures, such as the
mutual information, can be estimated by kernels (in a broader sense than the expansion about independence of [9]). A relevant measure is the kernel generalized variance (KGV [1]), which is based
on a sum of the logarithm of the eigenvalues of $V_{YX}$, while $I^{\mathrm{NOCCO}}$ is their squared sum. It is also
interesting to investigate whether the KGV has a kernel-free expression. Another topic for further
study is causal inference with the proposed measure, both with and without time information ([16]).
References
[1] F. Bach and M. Jordan. Kernel independent component analysis. J. Machine Learning Res., 3:1-48, 2002.
[2] C. Baker. Joint measures and cross-covariance operators. Trans. Amer. Math. Soc., 186:273-289, 1973.
[3] D. Edwards. Introduction to Graphical Modelling. Springer-Verlag, New York, 2000.
[4] K. Fukumizu, F. Bach, and A. Gretton. Statistical consistency of kernel canonical correlation analysis. J. Machine Learning Res., 8:361-383, 2007.
[5] K. Fukumizu, F. Bach, and M. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. J. Machine Learning Res., 5:73-99, 2004.
[6] K. Fukumizu, F. Bach, and M. Jordan. Kernel dimension reduction in regression. Tech. Report 715, Dept. Statistics, University of California, Berkeley, 2006.
[7] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two-sample problem. Advances in NIPS 19. MIT Press, 2007.
[8] A. Gretton, O. Bousquet, A. Smola, and B. Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. 16th Intern. Conf. Algorithmic Learning Theory, pp. 63-77. Springer, 2005.
[9] A. Gretton, R. Herbrich, A. Smola, O. Bousquet, and B. Schölkopf. Kernel methods for measuring independence. J. Machine Learning Res., 6:2075-2129, 2005.
[10] A. Gretton, K. Fukumizu, C. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical test of independence. Advances in NIPS 21, 2008, to appear.
[11] A. Kraskov, H. Stögbauer, and P. Grassberger. Estimating mutual information. Physical Review E, 69, 066138-1-16, 2004.
[12] T. Read and N. Cressie. Goodness-of-Fit Statistics for Discrete Multivariate Data. Springer-Verlag, 1988.
[13] M. Reed and B. Simon. Functional Analysis. Academic Press, 1980.
[14] A. Rényi. Probability Theory. North-Holland, 1970.
[15] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. J. Machine Learning Res., 2:67-93, 2001.
[16] X. Sun, D. Janzing, B. Schölkopf, and K. Fukumizu. A kernel-based causal learning algorithm. Proc. 24th Intern. Conf. Machine Learning, 2007, to appear.
[17] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representations. J. Machine Learning Res., 2:243-264, 2001.
2,582 | 3,341 | Selecting Observations against Adversarial Objectives
Andreas Krause
SCS, CMU
H. Brendan McMahan
Google, Inc.
Carlos Guestrin
SCS, CMU
Anupam Gupta
SCS, CMU
Abstract
In many applications, one has to actively select among a set of expensive observations before making an informed decision. Often, we want to select observations
which perform well when evaluated with an objective function chosen by an adversary. Examples include minimizing the maximum posterior variance in Gaussian
Process regression, robust experimental design, and sensor placement for outbreak
detection. In this paper, we present the Submodular Saturation algorithm, a simple and efficient algorithm with strong theoretical approximation guarantees for
the case where the possible objective functions exhibit submodularity, an intuitive
diminishing returns property. Moreover, we prove that better approximation algorithms do not exist unless NP-complete problems admit efficient algorithms.
We evaluate our algorithm on several real-world problems. For Gaussian Process
regression, our algorithm compares favorably with state-of-the-art heuristics described in the geostatistics literature, while being simpler, faster and providing
theoretical guarantees. For robust experimental design, our algorithm performs
favorably compared to SDP-based algorithms.
1
Introduction
In tasks such as sensor placement for environmental temperature monitoring or experimental design, one has to select among a large set of possible, but expensive, observations. Often, there are
several different objective functions which we want to simultaneously optimize. For example, in
the environmental monitoring problem, we want to minimize the marginal posterior variance of our
temperature estimate at all locations simultaneously. In experimental design, we often have uncertainty about the model parameters, and we want our experiments to be informative no matter what
the true parameters of the model are. These problems can be interpreted as a game: We select a
set of observations (sensor locations, experiments), and an adversary selects an objective function
(location to evaluate predictive variance, model parameters etc.) to test us on. Often, the individual
objective functions (e.g., the marginal variance at one location, or the information gain for a fixed set
of parameters [1, 2]) satisfy submodularity, an intuitive diminishing returns property: Adding a new
observation helps less if we have already made many observations, and more if we have made few
observation thus far. While NP-hard, the problem of selecting an optimal set of k observations maximizing a single submodular objective can be approximately solved using a simple greedy forwardselection algorithm, which is guaranteed to perform near-optimally [3]. However, as we show, this
simple myopic algorithm performs arbitrarily badly in the case of an adversarially chosen objective.
In this paper, we address this problem. In particular: (1) We present SATURATE, an efficient
algorithm for settings where an adversarially-chosen submodular objective function must be
optimized. Our algorithm guarantees solutions which are at least as informative as the optimal
solution, at only a slightly higher cost. (2) We prove that our approximation guarantee is best
possible and cannot be improved unless NP-complete problems admit efficient algorithms. (3) We
extensively evaluate our algorithm on several real-world tasks, including minimizing the maximum
posterior variance in Gaussian Process regression, finding experiment designs which are robust
with respect to parameter uncertainty, and sensor placement for outbreak detection.
2
The adversarial observation selection problem
Observation selection with a single submodular objective. Observation selection problems can
often be modeled using set functions: We have a finite set $\mathcal{V}$ of observations to choose from, and a utility function $F$ which assigns a real number $F(\mathcal{A})$ to each $\mathcal{A} \subseteq \mathcal{V}$, quantifying its informativeness. In many settings, such as the ones described above, the utility $F$ exhibits the property of submodularity: adding an observation helps more, the fewer observations made so far [2]. Formally, $F$ is submodular [3] if, for all $\mathcal{A} \subseteq \mathcal{B} \subseteq \mathcal{V}$ and $s \in \mathcal{V} \setminus \mathcal{B}$, it holds that $F(\mathcal{A} \cup \{s\}) - F(\mathcal{A}) \geq F(\mathcal{B} \cup \{s\}) - F(\mathcal{B})$; $F$ is monotonic if for all $\mathcal{A} \subseteq \mathcal{B} \subseteq \mathcal{V}$ it holds that $F(\mathcal{A}) \leq F(\mathcal{B})$; and $F$ is normalized if $F(\emptyset) = 0$. Hence, many observation selection problems can be formalized as
$\max_{\mathcal{A} \subseteq \mathcal{V}} F(\mathcal{A})$, subject to $|\mathcal{A}| \leq k$,   (2.1)

where $F$ is normalized, monotonic and submodular, and $k$ is a bound on the number of observations we can make. Since solving the problem (2.1) is generally NP-hard [4], in practice heuristics are often used. One such heuristic is the greedy algorithm. This algorithm starts with the empty set, and iteratively adds the element $s^* = \mathrm{argmax}_{s \in \mathcal{V} \setminus \mathcal{A}} F(\mathcal{A} \cup \{s\})$, until $k$ elements have been selected. Perhaps surprisingly, a fundamental result by Nemhauser et al. [3] states that for submodular functions, the greedy algorithm achieves a constant factor approximation: The set $\mathcal{A}_G$ obtained by the greedy algorithm achieves at least a constant fraction $(1 - 1/e)$ of the objective value obtained by the optimal solution, i.e., $F(\mathcal{A}_G) \geq (1 - 1/e) \max_{|\mathcal{A}| \leq k} F(\mathcal{A})$. Moreover, no polynomial time algorithm can provide a better approximation guarantee unless P = NP [4].
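For reference, the greedy forward-selection scheme is a few lines of Python; here F is any normalized monotonic submodular function given as a callable on Python sets, a hypothetical interface rather than code from [3]:

    def greedy(F, V, k):
        # Repeatedly add the element with the largest marginal gain; for
        # submodular F this achieves F(A) >= (1 - 1/e) * OPT.
        A = set()
        for _ in range(min(k, len(V))):
            best = max((s for s in V if s not in A),
                       key=lambda s: F(A | {s}) - F(A))
            A.add(best)
        return A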
Observation selection with adversarial objectives. In many applications (such as those discussed below), one wants to simultaneously optimize multiple objectives. Here, we are given a collection of monotonic submodular functions $F_1, \ldots, F_m$, and we want to solve

$\max_{\mathcal{A} \subseteq \mathcal{V}} \min_i F_i(\mathcal{A})$, subject to $|\mathcal{A}| \leq k$.   (2.2)

Problem (2.2) can be considered a game: First, we (the max-player) select a set of observations $\mathcal{A}$, and then our opponent (the min-player) selects a criterion $F_i$ to test us on. Our goal is to select a set $\mathcal{A}$ of observations which performs well against an opponent who chooses the worst possible $F_i$ knowing our choice $\mathcal{A}$. Thereby, we try to find a pure equilibrium to a sequential game on a matrix, with one row per $\mathcal{A}$, and one column per $F_i$. Note that even if the $F_i$ are all submodular, $G(\mathcal{A}) = \min_i F_i(\mathcal{A})$ is not submodular. In fact, we show below that, in this setting, the simple greedy algorithm (which performs near-optimally in the single-criterion setting) can perform arbitrarily badly.
Examples of adversarial observation selection problems. We consider three instances of
adversarial selection problems. Sec. 4 provides more details and experimental results for these
domains. Several more examples are presented in the longer version of this paper [5].
Minimizing the maximum Kriging variance. Consider a Gaussian Process (GP) [6] $\mathcal{X}_{\mathcal{V}}$ defined over a finite set of locations (indices) $\mathcal{V}$. Hereby, $\mathcal{X}_{\mathcal{V}}$ is a set of random variables, one variable $\mathcal{X}_s$ for each location $s \in \mathcal{V}$. Given a set of locations $\mathcal{A} \subseteq \mathcal{V}$ which we observe, we can compute the predictive distribution $P(\mathcal{X}_{\mathcal{V} \setminus \mathcal{A}} \mid \mathcal{X}_{\mathcal{A}} = x_{\mathcal{A}})$, i.e., the distribution of the variables $\mathcal{X}_{\mathcal{V} \setminus \mathcal{A}}$ at the unobserved locations $\mathcal{V} \setminus \mathcal{A}$, conditioned on the measurements at the selected locations, $\mathcal{X}_{\mathcal{A}} = x_{\mathcal{A}}$. Let $\sigma^2_{s|\mathcal{A}}$ be the residual variance after making observations at $\mathcal{A}$. Let $\Sigma_{\mathcal{A}\mathcal{A}}$ be the covariance matrix of the measurements at the chosen locations $\mathcal{A}$, and $\Sigma_{s\mathcal{A}}$ be the vector of cross-covariances between the measurements at $s$ and $\mathcal{A}$. Then, the variance $\sigma^2_{s|\mathcal{A}} = \sigma_s^2 - \Sigma_{s\mathcal{A}} \Sigma_{\mathcal{A}\mathcal{A}}^{-1} \Sigma_{\mathcal{A}s}$ depends only on the set $\mathcal{A}$, and not on the observed values $x_{\mathcal{A}}$. Assume that the a priori variance $\sigma_s^2$ is constant for all locations $s$ (in Sec. 3, we show our approach generalizes to non-constant marginal variances). We want to select locations $\mathcal{A}$ such that the maximum marginal variance is as small as possible. Equivalently, we can define the variance reduction $F_s(\mathcal{A}) = \sigma_s^2 - \sigma^2_{s|\mathcal{A}}$, and desire that the minimum variance reduction over all locations $s$ is as large as possible. Das and Kempe [1] show that, in many practical cases, the variance reduction $F_s$ is a monotonic submodular function.
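These quantities follow directly from the GP covariance. A small NumPy sketch (Sigma is the full prior covariance over all locations; in practice a small jitter may be needed for an ill-conditioned submatrix):

    import numpy as np

    def variance_reduction(Sigma, A, s):
        # F_s(A) = sigma_s^2 - sigma^2_{s|A} for index s and index list A.
        A = list(A)
        if not A:
            return 0.0
        Saa = Sigma[np.ix_(A, A)]
        Ssa = Sigma[s, A]
        residual = Sigma[s, s] - Ssa @ np.linalg.solve(Saa, Ssa)  # sigma^2_{s|A}
        return Sigma[s, s] - residual

    def adversarial_objective(Sigma, A):
        # min over locations s of the variance reduction, cf. Problem (2.2).
        return min(variance_reduction(Sigma, A, s) for s in range(len(Sigma)))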
Robust experimental designs. Another application is experimental design under nonlinear dynamics [7]. The goal is to estimate a set of parameters θ of a nonlinear function y = f(x, θ) + w, by providing a set of experimental stimuli x, and measuring the (noisy) response y. In many cases, experimental design for linear models (where y = A(x)ᵀθ + w) with Gaussian noise w can be efficiently solved [8]. In the nonlinear case, the common approach is to linearize f around an initial parameter estimate θ_0, i.e., y = f(x, θ_0) + V(x)(θ − θ_0) + w, where V(x) is the Jacobian of f with respect to the parameters θ, evaluated at θ_0. In [7], it was shown that the efficiency of the design can be very sensitive to the initial parameter estimates θ_0. Consequently, they develop an efficient semidefinite program (SDP) for E-optimal design (i.e., the goal is to minimize the maximum eigenvalue of the error covariance) which is robust against perturbations of the Jacobian V. However, it might be more natural to directly consider robustness with respect to perturbations of the initial parameter estimates θ_0, around which the linearization is performed. We show how to find (Bayesian A-optimal) designs which are robust against uncertainty in these parameter estimates.
In this setting, the objectives F_{θ_0}(A) are the reductions of the trace of the parameter covariance, F_{θ_0}(A) = tr(Σ_θ^{(θ_0)}) − tr(Σ_{θ|A}^{(θ_0)}), where Σ^{(θ_0)} is the joint covariance of observations and parameters after linearization around θ_0; thus, F_{θ_0} is the sum of marginal parameter variance reductions, which are individually monotonic and (often) submodular [1], and so F_{θ_0} is monotonic and submodular as well. Hence, in order to find a robust design, we maximize the minimum variance reduction, where the minimum is taken over (a discretization into a finite subset of) all initial parameter values θ_0.
Sensor placement for outbreak detection. Another class of examples are outbreak detection problems on graphs, such as contamination detection in water distribution networks [9]. Here, we are given a graph G = (V, E), and a phenomenon spreading dynamically over the graph. We define a set of intrusion scenarios I; each scenario i ∈ I models an outbreak (e.g., spreading of contamination) starting from a given node s ∈ V in the network. By placing sensors at a set of locations A ⊆ V, we can detect such an outbreak, and incur a utility F_i(A) (e.g., reduction in detection time or population affected). In [9], it was shown that these utilities F_i are monotonic and submodular for a large class of utility functions. In the adversarial setting, the adversary observes our sensor placement A, and then decides on an intrusion i for which our utility F_i(A) is as small as possible. Hence, our goal is to find a placement A which performs well against such an adversarial opponent.
Hardness of the adversarial observation selection problem. Given the near-optimal performance of the greedy algorithm for the single-objective problem, a natural question is whether the performance guarantee generalizes to the more complex adversarial setting. Unfortunately, this is far from true. Consider the case with two submodular functions, F_1 and F_2, where the set of observations is V = {s_1, s_2, t_1, t_2}. We set F_1(∅) = F_2(∅) = 0, and define F_1(A) = 1 if s_1 ∈ A, otherwise ε times the number of t_i contained in A. Similarly, if s_2 ∈ A, we set F_2(A) = 1, otherwise ε times the number of t_i contained in A. Both F_1 and F_2 are submodular and monotonic. Optimizing for a set of 2 elements, the greedy algorithm maximizing G(A) = min{F_1(A), F_2(A)} would choose the set {t_1, t_2}, since this choice increases G by 2ε, whereas adding s_i would not increase the score. However, the optimal solution with k = 2 is {s_1, s_2}, with a score of 1. Hence, as ε → 0, the greedy algorithm performs arbitrarily worse than the optimal solution. Our next hope would be to obtain a different good approximation algorithm. However, we can show that most likely this is not possible:
Theorem 1. Unless P = NP, there cannot exist any polynomial time approximation algorithm for Problem (2.2). More precisely: Let n be the size of the problem instance, and γ(·) > 0 be any positive function of n. If there exists a polynomial-time algorithm which is guaranteed to find a set A′ of size k such that min_i F_i(A′) ≥ γ(n) max_{|A|≤k} min_i F_i(A), then P = NP.

Thus, unless P = NP, there cannot exist any algorithm which is guaranteed to provide, e.g., even an exponentially small fraction (γ(n) = 2^{−n}) of the optimal solution. All proofs can be found in [5].
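The greedy failure in the counterexample above is easy to verify numerically; here is a minimal sketch (the variable names are ours, for illustration only):

```python
EPS = 1e-3  # the epsilon of the construction; greedy gets worse as EPS -> 0

def F1(A):
    return 1.0 if 's1' in A else EPS * len(A & {'t1', 't2'})

def F2(A):
    return 1.0 if 's2' in A else EPS * len(A & {'t1', 't2'})

def G(A):
    return min(F1(A), F2(A))

V, A = {'s1', 's2', 't1', 't2'}, set()
for _ in range(2):  # greedy maximization of G with k = 2
    best = max(V - A, key=lambda s: G(A | {s}) - G(A))
    A |= {best}

print(A, G(A))                        # greedy picks {t1, t2}: score 2*EPS
print({'s1', 's2'}, G({'s1', 's2'}))  # the optimum {s1, s2} scores 1
```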
3 The Submodular Saturation Algorithm
Since Theorem 1 rules out any approximation algorithm which respects the constraint k on the size
of the set A, our only hope for non-trivial guarantees requires us to relax this constraint. We now
present an algorithm that finds a set of observations which performs at least as well as the optimal set, but at slightly increased cost; moreover, we show that no efficient algorithm can provide better guarantees (under reasonable complexity-theoretic assumptions). For now we assume all F_i take only integral values; this assumption is relaxed later. The key idea is to consider the following alternative formulation:
max_{c,A} c,    subject to    c ≤ F_i(A) for 1 ≤ i ≤ m, and |A| ≤ αk.    (3.1)
We want a set A of size at most αk, such that F_i(A) ≥ c for all i, and c is as large as possible. Here α ≥ 1 is a parameter relaxing the constraint on |A|: if α = 1, we recover the original problem (2.2). We solve program (3.1) as follows: For each value c, we find the cheapest set A with F_i(A) ≥ c for all i. If this cheapest set has at most αk elements, then c is feasible. A binary search on c allows us to find the optimal solution with the maximum feasible c. We first show how to approximately solve Equation (3.1) for a fixed c. For c > 0 define F̂_{i,c}(A) = min{F_i(A), c}, the original function F_i truncated at score level c; these F̂_{i,c} functions are also submodular [10].
GPC(F̄_c, c):
  A ← ∅
  while F̄_c(A) < c do
    foreach s ∈ V \ A do  δ_s ← F̄_c(A ∪ {s}) − F̄_c(A)
    A ← A ∪ {argmax_s δ_s}
Algorithm 1: The greedy submodular partial cover (GPC) algorithm.
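A direct Python transcription of Algorithm 1; a sketch rather than an optimized implementation, assuming `F` is an arbitrary monotone submodular set function supplied as a callable:

```python
def gpc(F, V, c, eps=1e-9):
    """Greedy submodular partial cover: grow A until F(A) >= c.

    F : monotone submodular set function (here F is F_bar_c).
    V : iterable of ground-set elements.
    c : target value to saturate.
    """
    V, A = set(V), set()
    while F(A) < c - eps:
        # pick the element with the largest marginal gain
        gains = {s: F(A | {s}) - F(A) for s in V - A}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:   # no progress possible: c is infeasible
            break
        A.add(best)
    return A
```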
SATURATE(F_1, ..., F_m, k, α):
  c_min ← 0;  c_max ← min_i F_i(V);  A_best ← ∅
  while (c_max − c_min) ≥ 1/m do
    c ← (c_min + c_max)/2
    ∀A define F̄_c(A) ← (1/m) Σ_i min{F_i(A), c}
    A ← GPC(F̄_c, c)
    if |A| > αk then c_max ← c  else  c_min ← c;  A_best ← A
Algorithm 2: The Submodular Saturation algorithm (SATURATE).
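Putting the pieces together, a sketch of Algorithm 2 in Python, reusing the `gpc` function above and the averaged truncated objective F̄_c defined in the text:

```python
def saturate(Fs, V, k, alpha=1.0, tol=None):
    """Submodular saturation: binary search on c, covering with gpc().

    Fs    : list of monotone submodular functions F_i(set) -> float.
    V     : ground set; k: size budget; alpha: relaxation factor.
    tol   : stop when c_max - c_min <= tol (1/m for integral F_i).
    """
    m = len(Fs)
    c_min, c_max = 0.0, min(F(set(V)) for F in Fs)
    tol = 1.0 / m if tol is None else tol
    A_best = set()
    while c_max - c_min > tol:
        c = (c_min + c_max) / 2.0
        Fbar = lambda A, c=c: sum(min(F(A), c) for F in Fs) / m
        A = gpc(Fbar, V, c)
        if len(A) > alpha * k:
            c_max = c              # c infeasible with <= alpha*k elements
        else:
            c_min, A_best = c, A   # c feasible: keep this set
    return A_best
```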
Let F̄_c(A) = (1/m) Σ_i F̂_{i,c}(A) be their average value; submodular functions are closed under convex combinations, so F̄_c is submodular and monotonic. Furthermore, F_i(A) ≥ c for all 1 ≤ i ≤ m if and only if F̄_c(A) = c. Hence, in order to determine whether some c is feasible, we solve a submodular covering problem:

A_c = argmin_{A⊆V} |A|,    such that F̄_c(A) = c.    (3.2)
Such problems are NP-hard in general [4], but in [11] it is shown that the greedy algorithm (cf. Algorithm 1) achieves near-optimal performance on this problem. Using this result, we find:
Lemma 2. Given monotonic submodular functions F_1, ..., F_m and a (feasible) constant c, Algorithm 1 (with input F̄_c) finds a set A_G such that F_i(A_G) ≥ c for all i, and |A_G| ≤ α|A*|, where A* is the optimal solution, and α = 1 + log(max_{s∈V} Σ_i F_i({s})) ≥ 1 + log(m · max_{s∈V} F̄_c({s})).¹
We can compute this approximation guarantee α for any given instance of the adversarial observation selection problem. Hence, if for a given value of c the greedy algorithm returns a set of size greater than αk, there cannot exist a solution A′ with |A′| ≤ k and F_i(A′) ≥ c for all i; thus, the optimal solution to the adversarial observation selection problem must be less than c. We can use this argument to conduct a binary search to find the optimal value of c. We call Algorithm 2, which formalizes this procedure, the submodular saturation algorithm (SATURATE), as the algorithm considers the truncated objectives F̂_{i,c}, and chooses sets which saturate all these objectives. Theorem 3 (given below) states that SATURATE is guaranteed to find a set which achieves adversarial score min_i F_i at least as high as the optimal solution, if we allow the set to be logarithmically larger than the optimal solution.
Theorem 3. For any integer k, SATURATE finds a solution A_S such that min_i F_i(A_S) ≥ max_{|A|≤k} min_i F_i(A) and |A_S| ≤ αk, for α = 1 + log(max_{s∈V} Σ_i F_i({s})). The total number of submodular function evaluations is O(|V|² m log(Σ_i F_i(V))).
Note that the algorithm still makes sense for any value of α. However, if α < 1 + log(max_{s∈V} Σ_i F_i({s})), the guarantee of Theorem 3 does not hold. If we had an exact algorithm for submodular coverage, α = 1 would be the correct choice. Since the greedy algorithm solves submodular coverage very effectively, in our experiments we call SATURATE with α = 1, which empirically performs very well. The worst-case running time guarantee is quite pessimistic, and in practice the algorithm is much faster: Using a priority queue and lazy evaluations, Algorithm 1 can be sped up drastically (cf. [12] for details). Furthermore, in practical implementations, one would stop GPC once αk + 1 elements have been selected, which already proves that the optimal solution with k elements cannot achieve score c. Also, Algorithm 2 can be terminated once c_max − c_min is sufficiently small; in our experiments, 10-15 iterations usually sufficed.
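The lazy-evaluation speedup exploits submodularity: marginal gains can only shrink as A grows, so gains computed in earlier rounds are upper bounds and can be kept in a max-heap. A minimal sketch of this idea (our own illustration, not the implementation referenced in [12]; elements must be orderable for tie-breaking):

```python
import heapq

def lazy_greedy(F, V, k):
    """Greedy maximization of a submodular F with lazy gain evaluation."""
    A, base = set(), F(set())
    # heap entries: (-stale_gain, element, round when the gain was computed)
    heap = [(-(F({s}) - base), s, 0) for s in V]
    heapq.heapify(heap)
    for it in range(1, k + 1):
        while True:
            neg_gain, s, stamp = heapq.heappop(heap)
            if stamp == it:            # gain is up to date: safe to take it
                A.add(s)
                base = F(A)
                break
            gain = F(A | {s}) - base   # re-evaluate a stale gain
            heapq.heappush(heap, (-gain, s, it))
    return A
```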
One might ask whether the guarantee on the size of the set, α, can be improved. Unfortunately, this is not likely, as the following theorem shows:

Theorem 4. If there were a polynomial time algorithm which, for any integer k, is guaranteed to find a solution A_S such that min_i F_i(A_S) ≥ max_{|A|≤k} min_i F_i(A) and |A_S| ≤ αk, where α ≤ (1 − ε)(1 + log max_{s∈V} Σ_i F_i({s})) for some fixed ε > 0, then NP ⊆ DTIME(n^{log log n}).
¹This bound is only meaningful for integral F_i, otherwise it could be arbitrarily improved by scaling the F_i.
Hereby, DTIME(n^{log log n}) is a class of deterministic, slightly superpolynomial (but subexponential) algorithms [4]; the inclusion NP ⊆ DTIME(n^{log log n}) is considered unlikely [4].
Extensions. We now show how the assumptions made in our presentation above can be relaxed.
Non-integral objectives. Most objective functions F_i in the observation selection setting are not integral (e.g., marginal variances of GPs). If they take rational values, we can scale the objectives by multiplying by their common denominator. If we allow a small additive error, we can approximate their values by their leading digits. An analysis similar to the one presented in [2] can be used to bound the effect of this approximation on the theoretical guarantees obtained by the algorithm.
Non-constant thresholds. Consider the example of minimax Kriging designs for GP regression. Here, the F_i(A) = σ²_i − σ²_{i|A} denote the variance reductions at location i. However, rather than guaranteeing that F_i(A) ≥ c for all i (which, in this example, means that the minimum variance reduction is c), we want to guarantee that σ²_{i|A} ≤ c for all i. We can easily adapt our approach to handle this case: Instead of defining F̂_{i,c}(A) = min{F_i(A), c}, we define F̂_{i,c}(A) = min{F_i(A), σ²_i − c}, and then again perform binary search over c, but searching for the smallest feasible c instead. The algorithm, using objectives modified in this way, will bear the same approximation guarantees.
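A sketch of the modified truncation; `prior_vars` holds the a priori variances σ²_i (our own naming), and the surrounding binary search would now look for the smallest feasible c:

```python
def threshold_truncated_objectives(Fs, prior_vars, c):
    """F_hat_{i,c}(A) = min{F_i(A), sigma_i^2 - c}: saturating each
    variance reduction at the level that certifies sigma_{i|A}^2 <= c."""
    return [lambda A, F=F, v=v: min(F(A), v - c)
            for F, v in zip(Fs, prior_vars)]
```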
Non-uniform observation costs. We can extend SATURATE to the setting where different observations have different costs. Suppose a cost function g : V → ℝ⁺ assigns each element s ∈ V a positive cost g(s); the cost of a set of observations is then g(A) = Σ_{s∈A} g(s). The problem is to find A* = argmax_{A⊆V} min_i F_i(A) subject to g(A) ≤ B, where B > 0 is a budget we can spend on making observations. In this case, we use the rule δ_s ← (F̄_c(A ∪ {s}) − F̄_c(A)) / g(s) in Algorithm 1. For this modified algorithm, Theorem 3 still holds, with |A| replaced by g(A) and k replaced by B.
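A sketch of the cost-sensitive variant of the cover routine; as in the sketches above, `F` and `cost` are caller-supplied callables:

```python
def gpc_with_costs(F, V, c, cost, eps=1e-9):
    """GPC variant for non-uniform costs: rank by gain per unit cost."""
    V, A = set(V), set()
    while F(A) < c - eps:
        best = max(V - A, key=lambda s: (F(A | {s}) - F(A)) / cost(s))
        if F(A | {best}) - F(A) <= 0:  # c infeasible
            break
        A.add(best)
    return A  # compare g(A) against alpha * B in the binary search
```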
4 Experimental Results
Minimax Kriging. We use SATURATE to select observations in a GP to minimize the maximum posterior variance. We consider precipitation data from the Pacific Northwest of the United States [13]. We discretize the space into 167 locations. In order to estimate variance reduction, we consider the empirical covariance of 50 years of data, which we preprocessed as described in [2]. In the geostatistics literature, the predominant choice of optimization algorithm is a carefully tuned local search procedure, prominently simulated annealing (cf. [14, 15]). We compare our SATURATE algorithm against a state-of-the-art implementation of such a simulated annealing (SA) algorithm, first proposed by [14]. We use an optimized implementation described recently by [15]. This algorithm has 7 parameters which need to be tuned, describing the annealing schedule, distribution of iterations among several inner loops, etc. We use the parameter settings as reported by [15], and report the best result of the algorithm among 10 random trials. In order to compare observation sets of the same size, we called SATURATE with α = 1.
Fig. 1(a) compares simulated annealing, SATURATE, and the greedy algorithm which greedily selects elements that decrease the maximum variance the most. We also used SATURATE to initialize the simulated annealing algorithm (using only a single run of simulated annealing, as opposed to 10 random trials). SATURATE obtains placements which are drastically better than the placements obtained by the greedy algorithm. Furthermore, the performance is very close to the performance of the simulated annealing algorithm. When selecting 30 and more sensors, SATURATE strictly outperforms the simulated annealing algorithm. Furthermore, as Fig. 1(b) shows, SATURATE is significantly faster than simulated annealing, by factors of 5-10 for larger problems. When using SATURATE to initialize the simulated annealing algorithm, the resulting performance almost always gave the best solutions we were able to find, while still executing faster than simulated annealing with 10 random restarts as proposed by [15]. These results indicate that SATURATE compares favorably to state-of-the-art local search heuristics, while being faster, requiring no parameters to tune, and providing theoretical approximation guarantees.
Optimizing for the maximum variance could potentially be considered too pessimistic. Hence we compared placements obtained by SATURATE, minimizing the maximum marginal posterior variance, with placements obtained by the greedy algorithm, where we minimize the average marginal variance. Note that, whereas the reduction of the maximum variance is non-submodular, the average variance reduction is (often) submodular [1], and hence the greedy algorithm can be expected to provide near-optimal placements. Fig. 1(c) presents the maximum and average marginal variances for both algorithms. Our results show that if we optimize for the maximum variance, we still achieve comparable average variance. If we optimize for the average variance, however, the maximum posterior variance remains much higher.
[Figure 1 appears here: three panels plotting (a) maximum marginal variance vs. number of sensors for greedy, simulated annealing (SA), SATURATE and SATURATE+SA; (b) running time (s) vs. number of observations; (c) average vs. maximum marginal variance under the two optimization criteria.]
Figure 1: (a) SATURATE, greedy and SA on the precipitation data. SATURATE performs comparably with the fine-tuned SA algorithm, and outperforms it for larger placements. (b) Running times for the same experiment. (c) Optimizing for the maximum variance (using SATURATE) leads to low average variance, but optimizing for average variance (using greedy) does not lead to low maximum variance.
In the longer version of this paper [5], we present results on two more real data sets, which are qualitatively similar to those discussed here.
Robust Experimental Design. We consider the robust design of experiments for the Michaelis-Menten mass-action kinetics model, as discussed in [7]. The goal is least-squares parameter estimation for a function y = f(x, θ), where x is the chosen experimental stimulus (the initial substrate concentration S_0), and θ = (θ_1, θ_2) are two parameters as described in [7]. The stimulus x is chosen from a menu of six options, x ∈ {1/8, 1, 2, 4, 8, 16}, each of which can be repeatedly chosen. The goal is to produce a fractional design w = (w_1, ..., w_6), where each component w_i measures the relative frequency according to which the stimulus x_i is chosen. Since f is nonlinear, f is linearized around an initial parameter estimate θ_0 = (θ_01, θ_02), and approximated by its Jacobian V_{θ_0}. Classical experimental design considers the error covariance of the least squares estimate θ̂, Cov(θ̂ | θ_0, w) = σ²(V_{θ_0}ᵀ W V_{θ_0})⁻¹, where W = diag(w), and aims to find designs w which minimize this error covariance. E-optimality, the criterion adopted by [7], measures smallness in terms of the maximum eigenvalue of the error covariance matrix. The optimal w can be found using Semidefinite Programming (SDP) [8].
The estimate Cov(θ̂ | θ_0, w) depends on the initial parameter estimate θ_0, where the linearization is performed. However, since the goal is parameter estimation, a 'certain circularity is involved' [7]. To avoid this problem, [7] find a design w*(θ_0) by solving a robust SDP which minimizes the error size, subject to a worst-case (adversarially chosen) perturbation Δ on the Jacobian V_{θ_0}; the robustness parameter ρ bounds the spectral norm of Δ. As evaluation criterion, [7] define a notion of efficiency, which is the error size of the optimal design with correct initial parameter estimate, divided by the error when using a robust design obtained at the wrong initial parameter estimates, i.e.,

efficiency ≜ λ_max[Cov(θ̂ | θ_true, w_opt(θ_true))] / λ_max[Cov(θ̂ | θ_true, w*(θ_0))],

where w_opt(θ) is the E-optimal design for parameter θ. They show that for appropriately chosen values of ρ, the robust design is more efficient than the optimal design if the initial parameter θ_0 does not equal the true parameter.
While their results are very promising, an arguably more natural approach than perturbing the Jacobian would be to perturb the initial parameter estimate, around which the linearization is performed. E.g., if the function f describes a process which behaves characteristically differently in different 'phases', and the parameter θ controls which of the phases the process is in, then a robust design should intuitively 'hedge' the design against the behavior in each possible phase. In such a case, the uniform distribution (which the robust SDP chooses for large ρ) would not be the most robust design. If we discretize the space of possible parameter perturbations (within a reasonably chosen interval), we can use SATURATE to find robust experimental designs. While the classical E-optimality is not submodular [2], Bayesian A-optimality is (often) submodular [1, 2]. Here, the goal is to minimize the trace instead of the maximum eigenvalue as error metric. Furthermore, we equip the parameters θ with an uninformative normal prior (which we chose as diag([20², 20²])), and then minimize the expected trace of the posterior error covariance, tr(Σ_{θ|A}). Hereby, A is a discrete design of 20 experiments, where each option x_i can be chosen repeatedly. In order to apply SATURATE, for each θ, we define F_θ(A) as the normalized variance reduction F_θ(A) = (1/Z_θ)(σ²_θ − σ²_{θ|A}). The normalization Z_θ is chosen such that F_θ(A) = 1 if A = argmax_{|A′|=20} F_θ(A′), i.e., if A is chosen to maximize only F_θ. SATURATE is then used to maximize the worst-case normalized variance reduction.
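For concreteness, the following sketch builds one such objective F_θ from the Bayesian linear(ized) model. The Jacobian rows `jac_rows` (one per stimulus, evaluated at a given θ_0) and the normalizer `Z` are assumptions the caller must supply, e.g. by numerical differentiation of f and by greedily maximizing each F_θ alone; since designs here are multisets of stimuli, the ground set handed to SATURATE would be (stimulus, copy-index) pairs rather than plain stimuli.

```python
import numpy as np

def posterior_trace(jac_rows, A, Sigma0, noise_var=1.0):
    """tr(Sigma_{theta|A}) for a Bayesian linearized regression model.

    jac_rows : dict mapping stimulus x to its Jacobian row v(x) at theta_0.
    A        : list (multiset) of chosen stimuli.
    Sigma0   : prior covariance of theta, e.g. np.diag([20.0**2, 20.0**2]).
    """
    P = np.linalg.inv(Sigma0)            # prior precision
    for x in A:
        v = np.asarray(jac_rows[x])[:, None]
        P = P + (v @ v.T) / noise_var    # information adds per experiment
    return np.trace(np.linalg.inv(P))

def make_objective(jac_rows, Sigma0, Z):
    """F_theta(A) = (tr Sigma0 - tr Sigma_{theta|A}) / Z_theta."""
    prior_tr = np.trace(Sigma0)
    def F(A):
        return (prior_tr - posterior_trace(jac_rows, list(A), Sigma0)) / Z
    return F
```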
[Figure 2 appears here: three panels plotting (a) efficiency (w.r.t. E-optimality) vs. the initial parameter estimate θ_02, comparing SATURATE (large and small interval), the robust SDP (ρ = 16.3 and ρ = 10⁻³) and the classical E-optimal design, with regions A, B, C marked around the true θ_2; (b) maximum detection time (minutes) vs. number of sensors for greedy, SA, SATURATE and SATURATE+SA (objective Z1); (c) maximum population affected vs. number of sensors (objective Z2).]
Figure 2: (a) Efficiency of the robust SDP of [7] and SATURATE on a biological experimental design problem. For a large range of initial parameter estimates, SATURATE outperforms the SDP solutions. (b,c) SATURATE, greedy and SA in the water network setting, when optimizing worst-case detection time (Z1) and affected population (Z2). SATURATE performs comparably to SA for Z2 and strictly outperforms SA for Z1.
We reproduced the experiment of [7], where the initial estimate of the second component θ_02 of θ_0 was varied between 0 and 16, the 'true' value being θ_2 = 2. For each initial estimate of θ_02, we computed a robust design, using the SDP approach and using SATURATE, and compared them using the efficiency metric of [7]. We first optimized designs which are robust against a small perturbation of the initial parameter estimate. For the SDP, we chose a robustness parameter ρ = 10⁻³, as reported in [7]. For SATURATE, we considered an interval around [θ/(1+ε), θ(1+ε)], discretized into a 5 × 5 grid, with ε = 0.1. Fig. 2(a) shows three characteristically different regions, A, B, C, separated by vertical lines. In region B, which contains the true parameter setting, the E-optimal design (which is optimal if the true parameter is known, i.e., θ_02 = θ_2) performs similarly to both robust methods. Hence, in region B (i.e., small deviation from the true parameter), robustness is not really necessary. Outside of region B however, where the standard E-optimal design performs badly, both robust designs do not perform well either. This is an intuitive result, as they were optimized to be robust only to small parameter perturbations.
Consequently, we compared designs which are robust against a large parameter range. For the SDP, we chose ρ = 16.3, which is the maximum spectral variation of the Jacobian when we consider all initial estimates θ_02 varying between 0 and 16. For SATURATE, we optimized a single design which achieves the maximum normalized variance reduction over all values of θ_02 between 0 and 16. Fig. 2(a) shows that in this case the design obtained by SATURATE achieves an efficiency of 69%, whereas the efficiency of the SDP design is only 52%. In the regions A and C, the SATURATE design strictly outperforms the other robust designs. This experiment indicates that designs which are robust against a large range of initial parameter estimates, as provided by SATURATE, can be more efficient than designs which are robust against perturbations of the Jacobian (the SDP approach).
Outbreak Detection. Consider a city water distribution network, delivering water to households
via a system of pipes, pumps, and junctions. Accidental or malicious intrusions can cause contaminants to spread over the network, and we want to select a few locations (pipe junctions) to install
sensors, in order to detect these contaminations as quickly as possible. In August 2006, the Battle
of Water Sensor Networks (BWSN) [16] was organized as an international challenge to find the best
sensor placements for a real (but anonymized) metropolitan water distribution network, consisting
of 12,527 nodes. In this challenge, a set of intrusion scenarios is specified, and for each scenario
a realistic simulator provided by the EPA [17] is used to simulate the spread of the contaminant
for a 48 hour period. An intrusion is considered detected when one selected node shows positive
contaminant concentration. BWSN considered a variety of impact measures, including the time
to detection (called Z1 ), and the size of the affected population calculated using a realistic disease
model (Z2 ). The goal of BWSN was to minimize the expectation of the impact measures Z1 and
Z2 given a uniform distribution over intrusion scenarios.
In this paper, we consider the adversarial setting, where an opponent chooses the contamination scenario with knowledge of the sensor locations. The objective functions Z1 and Z2 are in fact submodular for a fixed intrusion scenario [9], and so the adversarial problem of minimizing the impact of the worst possible intrusion fits into our model. For these experiments, we consider scenarios which affect at least 10% of the network, resulting in a total of 3424 scenarios. Figures 2(b) and 2(c) compare the greedy algorithm, SATURATE and the simulated annealing (SA) algorithm for the problem of maximizing the worst-case detection time (Z1) and worst-case affected population (Z2). Interestingly, the behavior is very different for the two objectives. For the affected population (Z2), greedy performs reasonably, and SA sometimes even outperforms SATURATE. For the detection
time (Z1), however, the greedy algorithm did not improve the objective at all, and SA performs poorly. The reason is that for Z2, the maximum achievable scores F_i(V) vary drastically, since some scenarios have much higher impact than others. Hence, there is a strong 'gradient', as the adversarial objective changes quickly when the high-impact scenarios are covered. This gradient allows greedy and SA to work well. On the contrary, for Z1, the maximum achievable scores F_i(V) are constant, since all scenarios have the same simulation duration. Unless all scenarios are detected, the worst-case detection time stays constant at the simulation length. Hence, many node exchange proposals considered by SA, as well as the addition of a new sensor location by greedy, do not change the adversarial objective, and the algorithms have no useful performance metric. Similarly to the GP Kriging setting, our results show that optimizing the worst-case score leads to reasonable performance in the average-case score, but not necessarily vice versa.
5 Conclusions
In this paper, we considered the problem of selecting observations which are informative with respect to an objective function chosen by an adversary. We demonstrated how this class of problems
encompasses the problem of finding designs which minimize the maximum posterior variance in
Gaussian Process regression, robust experimental design, and detecting events spreading over
graphs. In each of these settings, the individual objectives are submodular and can be approximated
well using, e.g., the greedy algorithm; the adversarial objective, however, is not submodular. We
proved that there cannot exist any approximation algorithm for the adversarial problem if the constraint on the observation set size must be exactly met, unless P = NP. Consequently, we presented an efficient approximation algorithm, SATURATE, which finds observation sets which are guaranteed to be at least as informative as the optimal solution, and only logarithmically more expensive. In a strong sense, this guarantee is the best possible. We extensively evaluated our algorithm on several real-world problems. For Gaussian Process regression, we showed that SATURATE compares favorably to state-of-the-art heuristics, while being simpler, faster, and providing theoretical guarantees. For robust experimental design, SATURATE performs favorably compared to SDP-based approaches.
Acknowledgements This work was partially supported by NSF Grants No. CNS-0509383, CNS-0625518, CCF-0448095, CCF-0729022, and a gift from Intel. Anupam Gupta and Carlos Guestrin were partly supported by Alfred P. Sloan Fellowships, Carlos Guestrin by an IBM Faculty Fellowship and Andreas Krause by a Microsoft Research Graduate Fellowship.
References
[1] A. Das and D. Kempe. Algorithms for subset selection in linear regression. Manuscript, 2007.
[2] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. To appear in JMLR, 2007.
[3] G. Nemhauser, L. Wolsey, and M. Fisher. An analysis of the approximations for maximizing submodular set functions. Mathematical Programming, 14:265–294, 1978.
[4] U. Feige. A threshold of ln n for approximating set cover. J. ACM, 45(4), 1998.
[5] A. Krause, B. McMahan, C. Guestrin, and A. Gupta. Robust submodular observation selection. Technical report, CMU-ML-08-100, 2008.
[6] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. MIT Press, 2006.
[7] P. Flaherty, M. Jordan, and A. Arkin. Robust design of biological experiments. In NIPS, 2006.
[8] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge UP, March 2004.
[9] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen, and N. Glance. Cost-effective outbreak detection in networks. In KDD, 2007.
[10] T. Fujito. Approximation algorithms for submodular set cover with applications. TIEICE, 2000.
[11] L. A. Wolsey. An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica, 2:385–393, 1982.
[12] T. G. Robertazzi and S. C. Schwartz. An accelerated sequential algorithm for producing D-optimal designs. SIAM Journal of Scientific and Statistical Computing, 10(2):341–358, March 1989.
[13] M. Widmann and C. S. Bretherton. 50 km resolution daily precipitation for the Pacific Northwest. http://www.jisao.washington.edu/data sets/widmann/, May 1999.
[14] J. Sacks and S. Schiller. Statistical Decision Theory and Related Topics IV, Vol. 2. Springer, 1988.
[15] D. P. Wiens. Robustness in spatial studies II: minimax design. Environmetrics, 16:205–217, 2005.
[16] A. Ostfeld, J. G. Uber, and E. Salomons. Battle of water sensor networks: A design challenge for engineers and algorithms. In 8th Symposium on Water Distribution Systems Analysis, 2006.
[17] L. A. Rossman. The EPANET programmer's toolkit for analysis of water distribution systems. In Annual Water Resources Planning and Management Conference, 1999.
Collapsed Variational Inference for HDP
Yee Whye Teh
Gatsby Unit
University College London
Kenichi Kurihara
Dept. of Computer Science
Tokyo Institute of Technology
Max Welling
ICS
UC Irvine
ywteh@gatsby.ucl.ac.uk
kurihara@mi.cs.titech.ac.jp
welling@ics.uci.edu
Abstract
A wide variety of Dirichlet-multinomial ?topic? models have found interesting applications in recent years. While Gibbs sampling remains an important method of
inference in such models, variational techniques have certain advantages such as
easy assessment of convergence, easy optimization without the need to maintain
detailed balance, a bound on the marginal likelihood, and side-stepping of issues
with topic-identifiability. The most accurate variational technique thus far, namely
collapsed variational latent Dirichlet allocation, did not deal with model selection
nor did it include inference for hyperparameters. We address both issues by generalizing the technique, obtaining the first variational algorithm to deal with the
hierarchical Dirichlet process and to deal with hyperparameters of Dirichlet variables. Experiments show a significant improvement in accuracy.
1 Introduction
Many applications of graphical models have traditionally dealt with discrete state spaces, where
each variable is multinomial distributed given its parents [1]. Without strong prior knowledge on
the structure of dependencies between variables and their parents, the typical Bayesian prior over
parameters has been the Dirichlet distribution. This is because the Dirichlet prior is conjugate to
the multinomial, leading to simple and efficient computations for both the posterior over parameters
and the marginal likelihood of data. When there are latent or unobserved variables, the variational
Bayesian approach to posterior estimation, where the latent variables are assumed independent from
the parameters, has proven successful [2].
In recent years there has been a proliferation of graphical models composed of a multitude of multinomial and Dirichlet variables interacting in various inventive ways. The major classes include the
latent Dirichlet allocation (LDA) [3] and many other topic models inspired by LDA, and the hierarchical Dirichlet process (HDP) [4] and many other nonparametric models based on the Dirichlet
process (DP). LDA pioneered the use of Dirichlet distributed latent variables to represent shades
of membership to different clusters or topics, while the HDP pioneered the use of nonparametric
models to sidestep the need for model selection.
For these Dirichlet-multinomial models the inference method of choice is typically collapsed Gibbs
sampling, due to its simplicity, speed, and good predictive performance on test sets. However there
are drawbacks as well: it is often hard to assess convergence of the Markov chains, it is harder still
to accurately estimate the marginal probability of the training data or the predictive probability of
test data (if latent variables are associated with the test data), averaging topic-dependent quantities
based on samples is not well-defined because the topic labels may have switched during sampling
and large MCMC moves for avoiding local optima, such as split and merge algorithms, are tricky to implement due to the need to preserve detailed balance. Thus there seems to be a genuine need to
consider alternatives to sampling.
For LDA and its cousins, there are alternatives based on variational Bayesian (VB) approximations
[3] and on expectation propagation (EP) [5]. [6] found that EP was not efficient enough for large
scale applications, while VB suffered from significant bias resulting in worse predictive performance
than Gibbs sampling. [7] addressed these issues by proposing an improved VB approximation based
on the idea of collapsing, that is, integrating out the parameters while assuming that other latent
variables are independent. As for nonparametric models, a number of VB approximations have
been proposed for DP mixture models [8, 9], while to our knowledge none has been proposed for
the HDP thus far ([10] derived a VB inference for the HDP, but dealt only with point estimates for
higher level parameters).
In this paper we investigate a new VB approach to inference for the class of Dirichlet-multinomial
models. To be concrete we focus our attention on an application of the HDP to topic modeling [4],
though the approach is more generally applicable. Our approach is an extension of the collapsed
VB approximation for LDA (CV-LDA) presented in [7], and represents the first VB approximation
to the HDP¹. We call this the collapsed variational HDP (CV-HDP). The advantage of CV-HDP over CV-LDA is that the optimal number of variational components is not finite. This implies, apart from local optima, that we can keep adding components indefinitely while the algorithm will take care of removing unnecessary clusters. Ours is also the first variational algorithm to treat full posterior
distributions over the hyperparameters of Dirichlet variables, and we show experimentally that this
results in significant improvements in both the variational bound and test-set likelihood. We expect
our approach to be generally applicable to a wide variety of Dirichlet-multinomial models beyond
what we have described here.
2 A Nonparametric Hierarchical Bayesian Topic Model
We consider a document model where each document in a corpus is modelled as a mixture over topics, and each topic is a distribution over words in the vocabulary. Let there be D documents in the corpus, and W words in the vocabulary. For each document d = 1, ..., D, let θ_d be a vector of mixing proportions over topics. For each topic k, let φ_k be a vector of probabilities for words in that topic. Words in each document are drawn as follows: first choose a topic k with probability θ_dk, then choose a word w with probability φ_kw. Let x_id be the i-th word token in document d, and z_id its chosen topic. We have,

z_id | θ_d ∼ Mult(θ_d)    x_id | z_id, φ_{z_id} ∼ Mult(φ_{z_id})    (1)

We place Dirichlet priors on the parameters θ_d and φ_k,

θ_d | π ∼ Dir(απ)    φ_k | τ ∼ Dir(βτ)    (2)

where π is the corpus-wide distribution over topics, τ is the corpus-wide distribution over the vocabulary, and α and β are concentration parameters describing how close θ_d and φ_k are to their respective prior means π and τ.
If the number of topics K is finite and fixed, the above model is LDA. As we usually do not know the number of topics a priori, and would like a model that can determine this automatically, we consider a nonparametric extension reposed on the HDP [4]. Specifically, we have a countably infinite number of topics (thus θ_d and π are infinite-dimensional vectors), and we use a stick-breaking representation [11] for π:

π_k = π̃_k ∏_{l=1}^{k−1} (1 − π̃_l)    π̃_k | γ ∼ Beta(1, γ)    for k = 1, 2, ...    (3)

In the normal Dirichlet process notation, we would equivalently have G_d ∼ DP(α, G_0) and G_0 ∼ DP(γ, Dir(βτ)), where G_d = Σ_{k=1}^∞ θ_dk δ_{φ_k} and G_0 = Σ_{k=1}^∞ π_k δ_{φ_k} are sums of point masses, and Dir(βτ) is the base distribution. Finally, in addition to the prior over π, we place priors over the other hyperparameters α, β, γ and τ of the model as well,

α ∼ Gamma(a_α, b_α)    β ∼ Gamma(a_β, b_β)    γ ∼ Gamma(a_γ, b_γ)    τ ∼ Dir(a_τ)    (4)
The full model is shown graphically in Figure 1(left).
¹In this paper, by HDP we shall mean the two-level HDP topic model in Section 2. We do not claim to have derived a VB inference for the general HDP in [4], which is significantly more difficult; see final discussions.
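As an illustration, here is a minimal forward sampler for this generative process, truncating the infinite stick-breaking at `K_trunc`; the default hyperparameter values are arbitrary placeholders, not those used in the experiments below.

```python
import numpy as np

def sample_corpus(D, W, n_words, K_trunc=100,
                  a=(1.0, 1.0), b=(1.0, 1.0), g=(1.0, 1.0)):
    """Forward-sample the (truncated) HDP topic model of eqs. (1)-(4)."""
    alpha = np.random.gamma(a[0], 1.0 / a[1])   # rate b -> scale 1/b
    beta  = np.random.gamma(b[0], 1.0 / b[1])
    gamma = np.random.gamma(g[0], 1.0 / g[1])
    tau = np.random.dirichlet(np.ones(W))       # tau ~ Dir(a_tau)
    v = np.random.beta(1.0, gamma, size=K_trunc)  # stick fractions
    pi = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    phi = np.random.dirichlet(beta * tau, size=K_trunc)  # topics
    docs = []
    for d in range(D):
        theta = np.random.dirichlet(alpha * pi)  # document mixture
        z = np.random.choice(K_trunc, size=n_words, p=theta)
        x = np.array([np.random.choice(W, p=phi[k]) for k in z])
        docs.append(x)
    return docs
```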
[Figure 1 appears here: left, the graphical model of the HDP topic model (hyperparameters γ, α, β, a_τ; variables π̃, θ_d, z_id, x_id, φ_k, τ; plates over words i = 1...n_d, documents d = 1...D and topics k = 1...∞); right, the factor graph of the model with the auxiliary variables s_d, η_d, t_k, ξ_k.]
Figure 1: Left: The HDP topic model. Right: Factor graph of the model with auxiliary variables.
3 Collapsed Variational Bayesian Inference for HDP
There is substantial empirical evidence that marginalizing out variables is helpful for efficient inference. For instance, in [12] it was observed that Gibbs sampling enjoys better mixing, while in [7] it
was shown that variational inference is more accurate in this collapsed space. In the following we
will build on this experience and propose a collapsed variational inference algorithm for the HDP,
based upon first replacing the parameters with auxiliary variables, then effectively collapsing out the
auxiliary variables variationally. The algorithm is fully Bayesian in the sense that all parameter posteriors are treated exactly and full posterior distributions are maintained for all hyperparameters. The
only assumptions made are independencies among the latent topic variables and hyperparameters,
and that there is a finite upper bound on the number of topics used (which is found automatically).
The only inputs required of the modeller are the values of the top-level parameters a_α, b_α, ....
3.1 Replacing parameters with auxiliary variables
In order to obtain efficient variational updates, we shall replace the parameters θ = {θ_d} and φ = {φ_k} with auxiliary variables. Specifically, we first integrate out the parameters; this gives a joint distribution over latent variables z = {z_id} and word tokens x = {x_id} as follows:

p(z, x | α, β, γ, π, τ) = ∏_{d=1}^D [ Γ(α)/Γ(α + n_{d··}) ∏_{k=1}^K Γ(απ_k + n_{dk·})/Γ(απ_k) ] ∏_{k=1}^K [ Γ(β)/Γ(β + n_{·k·}) ∏_{w=1}^W Γ(βτ_w + n_{·kw})/Γ(βτ_w) ]    (5)
with n_dkw = #{i : x_id = w, z_id = k}, a dot denoting summation over that index, and K denoting an index such that z_id ≤ K for all i, d. The ratios of gamma functions in (5) result from the normalization constants of the Dirichlet densities of θ and φ, and prove to be nuisances for updating the hyperparameter posteriors. Thus we introduce four sets of auxiliary variables: η_d and ξ_k taking values in [0, 1], and s_dk and t_kw taking integral values. This results in a joint probability distribution over an expanded system,

p(z, x, η, ξ, s, t | α, β, γ, π, τ) = ∏_{d=1}^D [ η_d^{α−1} (1−η_d)^{n_{d··}−1} / Γ(n_{d··}) · ∏_{k=1}^K S(n_{dk·}, s_dk) (απ_k)^{s_dk} ] × ∏_{k=1}^K [ ξ_k^{β−1} (1−ξ_k)^{n_{·k·}−1} / Γ(n_{·k·}) · ∏_{w=1}^W S(n_{·kw}, t_kw) (βτ_w)^{t_kw} ]    (6)

where S(n, m) are the unsigned Stirling numbers of the first kind, and boldface letters denote sets of the corresponding variables. It can be readily verified that marginalizing out η, ξ, s and t reduces (6) to (5). The main insight is that conditioned on z and x the auxiliary variables are independent and have well-known distributions. Specifically, η_d and ξ_k are Beta distributed, while s_dk (respectively t_kw) is the random number of occupied tables in a Chinese restaurant process with n_{dk·} (respectively n_{·kw}) customers and a strength parameter of απ_k (respectively βτ_w) [13, 4].
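This table-count characterization is easy to check by simulation; a small sketch (our own, for illustration):

```python
import numpy as np

def crp_num_tables(n, strength):
    """Occupied tables after seating n customers in a Chinese restaurant
    process with concentration `strength` (= alpha*pi_k or beta*tau_w)."""
    tables = 0
    for i in range(n):
        # customer i opens a new table with probability strength/(strength+i)
        if np.random.rand() < strength / (strength + i):
            tables += 1
    return tables

samples = [crp_num_tables(50, 0.5) for _ in range(10000)]
# Monte Carlo mean; analytically strength*(digamma(strength+50)-digamma(strength))
print(np.mean(samples))
```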
3.2 The Variational Approximation
We assume the following form for the variational posterior over the auxiliary variable system:

q(z, η, ξ, s, t, α, β, γ, π̃, τ) = q(α) q(β) q(γ) q(τ) q(π̃) q(η, ξ, s, t | z) ∏_{d=1}^D ∏_{i=1}^{n_{d··}} q(z_id)    (7)

where the dependence of the auxiliary variables on z is modelled exactly. [7] showed that modelling exactly the dependence of a set of variables on another set is equivalent to integrating out the first set. Thus we can interpret (7) as integrating out the auxiliary variables with respect to z. Given the above factorization, q(π̃) further factorizes so that the π̃_k's are independent, as do the posteriors over the auxiliary variables.
For computational tractability, we also truncated our posterior representation to K topics. Specifically, we assumed that q(z_id > K) = 0 for every i and d. A consequence is that observations have no effect on π̃_k and φ_k for all k > K, and these parameters can be exactly marginalized out. Notice that our approach to truncation is different from that in [8], who implemented a truncation at T by instead fixing the posterior for the stick weight q(v_T = 1) = 1, and from that in [9], who assumed that the variational posteriors for parameters beyond the truncation level are set at their priors. Our truncation approximation is nested like that in [9], and unlike that in [8]. Our approach is also simpler than that in [9], which requires computing an infinite sum that is intractable in the case of HDPs. We shall treat K as a parameter of the variational approximation, possibly optimized by iteratively splitting or merging topics (though we have not explored these in this paper; see the discussion section). As in [9], we reordered the topic labels such that E[n_{·1·}] > E[n_{·2·}] > ···. An expression for the variational bound on the marginal log-likelihood is given in Appendix A.
3.3 Variational Updates
In this section we shall derive the complete set of variational updates for the system. In the following, E[y] denotes the expectation of y, G[y] = e^{E[log y]} the geometric expectation, and V[y] = E[y²] − E[y]² the variance. Let Ψ(y) = ∂ log Γ(y)/∂y be the digamma function. We shall also employ index summation shorthands: a dot (·) sums out that index, while > l sums over indices greater than l.
Hyperparameters. Updates for the hyperparameters are derived using the standard fully factorized variational approach, since they are assumed independent from each other and from other variables. For completeness we list these here, noting that α, β, γ are gamma distributed in the posterior, the π̃_k's are beta distributed, and τ is Dirichlet distributed:

q(α) ∝ α^{a_α + E[s_{··}] − 1} e^{−α (b_α − Σ_d E[log η_d])}
q(β) ∝ β^{a_β + E[t_{··}] − 1} e^{−β (b_β − Σ_k E[log ξ_k])}
q(γ) ∝ γ^{a_γ + K − 1} e^{−γ (b_γ − Σ_{k=1}^K E[log(1 − π̃_k)])}
q(π̃_k) ∝ π̃_k^{E[s_{·k}]} (1 − π̃_k)^{E[γ] + E[s_{·>k}] − 1}
q(τ) ∝ ∏_{w=1}^W τ_w^{a_τ + E[t_{·w}] − 1}    (8)
In subsequent updates we will need averages and geometric averages of these quantities, which can be extracted using the following identities: p(x) ∝ x^{a−1} e^{−bx} ⇒ E[x] = a/b, G[x] = e^{Ψ(a)}/b; p(x) ∝ ∏_k x_k^{a_k−1} ⇒ G[x_k] = e^{Ψ(a_k)}/e^{Ψ(Σ_k a_k)}. Note also that the geometric expectations factorize: G[απ_k] = G[α]G[π_k], G[βτ_w] = G[β]G[τ_w], and G[π_k] = G[π̃_k] ∏_{l=1}^{k−1} G[1 − π̃_l].
Auxiliary variables. The variational posteriors for the auxiliary variables depend on z through the counts n_dkw. η_d and ξ_k are Beta distributed. If n_{dk·} = 0 then q(s_dk = 0) = 1, otherwise q(s_dk) > 0 only if 1 ≤ s_dk ≤ n_{dk·}. Similarly for t_kw. The posteriors are:

q(η_d | z) ∝ η_d^{E[α]−1} (1 − η_d)^{n_{d··}−1}
q(ξ_k | z) ∝ ξ_k^{E[β]−1} (1 − ξ_k)^{n_{·k·}−1}
q(s_dk = m | z) ∝ S(n_{dk·}, m) (G[απ_k])^m
q(t_kw = m | z) ∝ S(n_{·kw}, m) (G[βτ_w])^m    (9)
To obtain expectations of the auxiliary variables in (8) we will have to average over z as well. For η_d this is E[log η_d] = Ψ(E[α]) − Ψ(E[α] + n_{d··}), where n_{d··} is the (fixed) number of words in document d. For the other auxiliary variables these expectations depend on counts which can take on many values, and a naive computation can be expensive. We derive computationally tractable approximations based upon an improvement to the second-order approximation in [7]. As we see in the experiments, these approximations are very accurate. Consider E[log ξ_k]. We have,

E[log ξ_k | z] = Ψ(E[β]) − Ψ(E[β] + n_{·k·})    (10)

and we need to average over n_{·k·} as well. [7] tackled a similar problem with log instead of Ψ, using a second-order Taylor expansion to log. Unfortunately such an approximation failed to work in our case, as the digamma function Ψ(y) diverges much more quickly than log y at y = 0. Our solution is to treat the case n_{·k·} = 0 exactly, and apply the second-order approximation when n_{·k·} > 0. This leads to the following approximation:

E[log ξ_k] ≈ P₊[n_{·k·}] ( Ψ(E[β]) − Ψ(E[β] + E₊[n_{·k·}]) − ½ V₊[n_{·k·}] Ψ″(E[β] + E₊[n_{·k·}]) )    (11)
where P₊ is the 'probability of being positive' operator, P₊[y] = q(y > 0), and E₊[y], V₊[y] are the expectation and variance conditional on y > 0. The other two expectations are derived similarly, making use of the fact that s_dk and t_kw are distributionally equal to the random numbers of tables in Chinese restaurant processes:

E[s_dk] ≈ G[απ_k] P₊[n_{dk·}] ( Ψ(G[απ_k] + E₊[n_{dk·}]) − Ψ(G[απ_k]) + ½ V₊[n_{dk·}] Ψ″(G[απ_k] + E₊[n_{dk·}]) )
E[t_kw] ≈ G[βτ_w] P₊[n_{·kw}] ( Ψ(G[βτ_w] + E₊[n_{·kw}]) − Ψ(G[βτ_w]) + ½ V₊[n_{·kw}] Ψ″(G[βτ_w] + E₊[n_{·kw}]) )    (12)
As in [7], we can efficiently track the relevant quantities above by noting that each count is a sum of independent Bernoulli variables. Consider n_{dk·} as an example. We keep track of three quantities:

E[n_{dk·}] = Σ_i q(z_id = k)    V[n_{dk·}] = Σ_i q(z_id = k) q(z_id ≠ k)    Z[n_{dk·}] = Σ_i log q(z_id ≠ k)    (13)

Some algebraic manipulations now show that:

P₊[n_{dk·}] = 1 − e^{Z[n_{dk·}]}    E₊[n_{dk·}] = E[n_{dk·}] / P₊[n_{dk·}]    V₊[n_{dk·}] = V[n_{dk·}] / P₊[n_{dk·}] − e^{Z[n_{dk·}]} E₊[n_{dk·}]²    (14)
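A minimal sketch of computing these statistics from the variational responsibilities of one document (array names are ours; it assumes q(z_id = k) < 1 so that P₊ is well defined):

```python
import numpy as np

def count_stats(q_z, k):
    """E, V, Z statistics of n_{dk.} = sum_i 1[z_id = k] under q(z).

    q_z : (n_d, K) array with q_z[i, j] = q(z_id = j) for one document d.
    """
    p = q_z[:, k]
    E = p.sum()                     # eq. (13)
    V = (p * (1.0 - p)).sum()       # sum of Bernoulli variances
    Z = np.log1p(-p).sum()          # sum_i log q(z_id != k)
    P_plus = 1.0 - np.exp(Z)        # eq. (14)
    E_plus = E / P_plus
    V_plus = V / P_plus - np.exp(Z) * E_plus**2
    return E, V, P_plus, E_plus, V_plus
```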
Topic assignment variables. [7] showed that if the dependence of a set of variables, say A, on another set of variables, say z, is modelled exactly, then in deriving the updates for z we may equivalently integrate out A. Applying this to our situation with A = {η, ξ, s, t}, we obtain updates similar to those in [7], except that the hyperparameters are replaced by either their expectations or their geometric expectations, depending on which is used in the updates for the corresponding auxiliary variables:
q(z_id = k) ∝ G[ G[απ_k] + n^{¬id}_{dk·} ] G[ G[βτ_{x_id}] + n^{¬id}_{·k x_id} ] G[ E[β] + n^{¬id}_{·k·} ]^{−1}
≈ ( G[απ_k] + E[n^{¬id}_{dk·}] ) ( G[βτ_{x_id}] + E[n^{¬id}_{·k x_id}] ) ( E[β] + E[n^{¬id}_{·k·}] )^{−1} × exp( − V[n^{¬id}_{dk·}] / (2 (G[απ_k] + E[n^{¬id}_{dk·}])²) − V[n^{¬id}_{·k x_id}] / (2 (G[βτ_{x_id}] + E[n^{¬id}_{·k x_id}])²) + V[n^{¬id}_{·k·}] / (2 (E[β] + E[n^{¬id}_{·k·}])²) )    (15)
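For illustration, a minimal sketch of the resulting update for a single token, taking the leave-one-out (¬id) count statistics as K-vectors (all array names are ours):

```python
import numpy as np

def update_qz(E_n_dk, V_n_dk, E_n_kw, V_n_kw, E_n_k, V_n_k,
              G_alpha_pi, G_beta_tau_w, E_beta):
    """One CVB update q(z_id = k) following eq. (15), for word w = x_id.

    G_alpha_pi   : K-vector of geometric expectations G[alpha*pi_k].
    G_beta_tau_w : scalar G[beta*tau_w] for the observed word w.
    E_beta       : scalar mean of beta.
    """
    a = G_alpha_pi + E_n_dk          # document-topic term
    b = G_beta_tau_w + E_n_kw        # topic-word term
    c = E_beta + E_n_k               # topic-size normalizer
    log_q = (np.log(a) + np.log(b) - np.log(c)
             - V_n_dk / (2 * a**2) - V_n_kw / (2 * b**2) + V_n_k / (2 * c**2))
    q = np.exp(log_q - log_q.max())  # normalize stably
    return q / q.sum()
```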
4 Experiments
We implemented and compared performance for 5 inference algorithms for LDA and HDP: 1) variational LDA (V-LDA) [3], 2) collapsed variational LDA (CV-LDA) [7], 3) collapsed variational HDP (CV-HDP, this paper), 4) collapsed Gibbs sampling for LDA (G-LDA) [12], and 5) the direct assignment Gibbs sampler for HDP (G-HDP) [4].

We report results on the following 3 datasets: i) KOS (W = 6906, D = 3430, number of word tokens N = 467,714), ii) a subset of the Reuters dataset consisting of news topics with a number of documents larger than 300 (W = 4593, D = 8433, N = 566,298), iii) a subset of the 20Newsgroups dataset consisting of the topics 'comp.os.ms-windows.misc', 'rec.autos', 'rec.sport.baseball', 'sci.space' and 'talk.politics.misc' (W = 8424, D = 4716, N = 437,850).

For G-HDP we use the code released at http://www.gatsby.ucl.ac.uk/~ywteh/research/software.html. The variables β, τ are not adapted in that code, so we fixed them at β = 100 and τ_w = 1/W for all algorithms (see below for discussion regarding adapting these in CV-HDP). G-HDP was initialized with either 1 topic (G-HDP1) or with 100 topics (G-HDP100). For CV-HDP we use the following initialization: E[β] = G[β] = 100 and G[τ_w] = 1/W (kept fixed to compare with G-HDP), E[α] = a_α/b_α, G[α] = e^{Ψ(a_α)}/b_α, E[γ] = a_γ/b_γ, G[π_k] = 1/K, and q(z_ij = k) ∝ 1 + u with u ∼ U[0, 1]. We set² the hyperparameters a_α, b_α, a_γ, b_γ in the range [2, 6], while a_β, b_β were chosen in the range [5, 10] and a_τ in [30-50]/W. The number of topics used in CV-HDP was truncated at 40, 80, and 120 topics, corresponding to the number of topics used in the LDA algorithms. Finally, for all LDA algorithms we used α = 0.1 and π_k = 1/K.
²We actually set these values using a fixed but somewhat elaborate scheme, which is the reason they ended up different for each dataset. Note that this scheme simply converts prior expectations about the number of topics and amount of sharing into hyperparameter values, and that they were never tweaked. Since they always ended up in these compact ranges, and since we do not expect a strong dependence on their values inside these ranges, we choose to omit the details.
Performance was evaluated by comparing i) the in-sample (train) variational bound on the log-likelihood for all three variational methods and ii) the out-of-sample (test) log-likelihood for all five methods. All inference algorithms were run on 90% of the words in each document while test-set performance was evaluated on the remaining 10% of the words. Test-set log-likelihood was computed as follows for the variational methods:

p(x^test) = ∏_{ij} Σ_k θ̄_jk φ̄_{k x^test_ij}    where    θ̄_jk = (απ_k + E_q[n_{jk·}]) / (α + E_q[n_{j··}]),    φ̄_kw = (βτ_w + E_q[n_{·kw}]) / (β + E_q[n_{·k·}])    (16)
Note that we used estimated mean values of θ_jk and φ_kw [14]. For CV-HDP we replaced all hyperparameters by their expectations. For the Gibbs sampling algorithms, given S samples from the posterior, we used:

p(x^test) = ∏_{ij} (1/S) Σ_{s=1}^S Σ_k θ^s_jk φ^s_{k x^test_ij}    where    θ^s_jk = (απ^s_k + n^s_{jk·}) / (α + n^s_{j··}),    φ^s_kw = (βτ_w + n^s_{·kw}) / (β + n^s_{·k·})    (17)
We used all samples obtained by the Gibbs sampling algorithms after an initial burn-in period; each
point in the predictive probabilities plots below is obtained from the samples collected thus far.
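For concreteness, a small NumPy sketch of the per-word test log-likelihood in (16); the matrices theta_bar (D × K) and phi_bar (K × W) are assumed to have been assembled from the variational statistics as above, and the document layout is our own.

import numpy as np

def test_log_likelihood(theta_bar, phi_bar, test_docs):
    # eq. (16): p(x_test) = prod_ij sum_k theta_bar[j, k] phi_bar[k, w_ij]
    total, n_tokens = 0.0, 0
    for j, words in enumerate(test_docs):        # test_docs[j] is a list of word ids
        probs = theta_bar[j] @ phi_bar[:, words]
        total += np.log(probs).sum()
        n_tokens += len(words)
    return total / n_tokens                      # per-word log-likelihood

# tiny example: D = 2 documents, K = 2 topics, W = 4 word types
theta_bar = np.array([[0.7, 0.3], [0.4, 0.6]])
phi_bar = np.array([[0.4, 0.3, 0.2, 0.1], [0.1, 0.2, 0.3, 0.4]])
print(test_log_likelihood(theta_bar, phi_bar, [[0, 2], [3, 3, 1]]))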
The results, shown in Figure 2, display a significant improvement in accuracy of CV-HDP over CV-LDA, both in terms of the bound on the training log-likelihood as well as for the test-set log-likelihood. This is caused by the fact that CV-HDP is learning the variational distributions over the hyperparameters. We note that we have not trained η or τ for any of these methods; in fact, initial results for training them in CV-HDP showed no additional improvement in test-set log-likelihood, and in some cases even a deterioration of the results. A second observation is that convergence of all variational methods is faster than for the sampling methods. Thirdly, we see significant local optima effects in our simulations. For example, G-HDP100 achieves the best results, better than G-HDP1, indicating that pruning topics is a better way than adding topics to escape local optima in these models and leads to better posterior modes.
In further experiments we have also found that, because of these local optima, the variational methods benefit from better initializations. In Figure 3 we show results when the variational methods were initialized at the last state obtained by G-HDP100. We see that indeed the variational methods were able to find significantly better local optima in the vicinity of the one found by G-HDP100, and that CV-HDP is still consistently better than the other variational methods.
5
Discussion
In this paper we have explored collapsed variational inference for the HDP. Our algorithm is the first
to deal with the HDP and with posteriors over the parameters of Dirichlet distributions. We found
that the CV-HDP performs significantly better than the CV-LDA on both test-set likelihood and the
variational bound. A caveat is that CV-HDP gives slightly worse test-set likelihood than collapsed
Gibbs sampling. However, as discussed in the introduction, we believe there are advantages to
variational approximations that are not available to sampling methods. A second caveat is that our
variational approximation works only for two-layer HDPs: a layer of group-specific DPs, and a
global DP tying the groups together. It would be interesting to explore variational approximations
for more general HDPs.
CV-HDP presents an improvement over CV-LDA in two ways. Firstly, we use a more sophisticated
variational approximation that can infer posterior distributions over the higher level variables in the
model. Secondly, we use a more sophisticated HDP based model with an infinite number of topics,
and allow the model to find an appropriate number of topics automatically. These two advances are
coupled, because we needed the more sophisticated variational approximation to deal with the HDP.
Along the way we have also proposed two useful technical tricks. Firstly, we have a new truncation
technique that guarantees nesting. As a result we know that the variational bound on the marginal
log-likelihood will reach its highest value (ignoring local optima issues) when K → ∞. This fact should facilitate the search over the number of topics or clusters, e.g. by splitting and merging topics, an aspect that we have not yet fully explored, and from which we expect to gain significantly given the observed local optima issues in the experiments. Secondly, we have an improved second-order approximation that is able to handle the often-encountered digamma function accurately.
An issue raised by the reviewers and in need of more thought by the community is the need for better
evaluation criteria. The standard evaluation criteria in this area of research are the variational bound
[Figure 2 plots: per-word log p(xtest) and variational bounds against K and #steps for the three datasets; legend: G-HDP100, G-HDP1, G-LDA, CV-HDP, CV-LDA, V-LDA.]
Figure 2: Left column: KOS, Middle column: Reuters and Right column: 20Newsgroups. Top row: log p(xtest) as a function of K, Middle row: log p(xtest) as a function of number of steps (defined as number of iterations multiplied by K) and Bottom row: variational bounds as a function of K. Log probabilities are on a per-word basis. Shown are averages and standard errors obtained by repeating the experiments 10 times with random restarts. The distributions over the number of topics found by G-HDP1 are: KOS: K = 113.2 ± 11.4, Reuters: K = 60.4 ± 6.4, 20News: K = 83.5 ± 5.0. For G-HDP100 we have: KOS: K = 168.3 ± 3.9, Reuters: K = 122.2 ± 5.0, 20News: K = 128.1 ± 6.6.
[Figure 3 plots: variational bound and log p(test)/N against #steps; legend: G-HDP100, and Gibbs-initialized vs. randomly initialized CV-HDP, CV-LDA, V-LDA.]
Figure 3: G-HDP100 initialized variational methods (K = 130), compared against variational methods initialized in the usual manner with K = 130 as well. Results were averaged over 10 repeats.
and the test-set likelihood. However both confound improvements to the model and improvements
to the inference method. An alternative is to compare the computed posteriors over latent variables
on toy problems with known true values. However such toy problems are much smaller than real
world problems, and inferential quality on such problems may be of limited interest to practitioners.
We expect the proliferation of Dirichlet-multinomial models and their many exciting applications to
continue. For some applications variational approximations may prove to be the most convenient
tool for inference. We believe that the methods presented here are applicable to many models of this
general class and we hope to provide general purpose software to support inference in these models
in the future.
A
Variational lower bound
E[log p(z, x | α, β, η) − log q(z)] − KL[q(α)‖p(α)] − KL[q(γ)‖p(γ)] − Σ_{k=1}^K KL[q(β̃_k)‖p(β̃_k)] − KL[q(η)‖p(η)]   (18)
= Σ_d log[Γ(E[α]) / Γ(E[α] + n_d··)] + Σ_{d,k} F[log(Γ(G[α]G[β_k] + n_dk·) / Γ(G[α]G[β_k]))] + Σ_k F[log(Γ(E[η]) / Γ(E[η] + n_·k·))] + Σ_{k,w} F[log(Γ(G[η]G[τ_w] + n_·kw) / Γ(G[η]G[τ_w]))] − Σ_{d,k} Σ_{i=1}^{n_d} q(z_id = k) log q(z_id = k),
minus the cross-entropy terms of the gamma posteriors on α and γ (which involve the expected auxiliary counts E[s··] and E[t··] and the expectations Σ_d E[log θ̃_d] and Σ_k E[log β̃_k]), of the beta posteriors on the sticks β̃_k, and of the Dirichlet posterior on τ,
where F[f(n)] = P₊[n] (f(E₊[n]) + ½ V₊[n] f″(E₊[n])) is the improved second-order approximation.
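A minimal sketch of F applied to f(n) = log Γ(c + n) − log Γ(c), where n is a sum of independent Bernoulli indicators q(z_id = k). The conditional moments given n ≥ 1 follow from the unconditional ones; SciPy is assumed available. This is our illustration of the definition above, not the authors' code.

import numpy as np
from scipy.special import gammaln, polygamma

def improved_second_order(c, q):
    # F[f(n)] = P+[n] (f(E+[n]) + 0.5 V+[n] f''(E+[n])) for f(n) = lgamma(c+n) - lgamma(c),
    # where n is a sum of independent Bernoulli(q_i) indicators.
    q = np.asarray(q, dtype=float)
    p_plus = 1.0 - np.prod(1.0 - q)                  # P[n >= 1]
    mean, var = q.sum(), (q * (1.0 - q)).sum()
    e_plus = mean / p_plus                           # E[n | n >= 1]
    v_plus = (var + mean**2) / p_plus - e_plus**2    # V[n | n >= 1]
    f = gammaln(c + e_plus) - gammaln(c)
    f2 = polygamma(1, c + e_plus)                    # second derivative of lgamma
    return p_plus * (f + 0.5 * v_plus * f2)

print(improved_second_order(0.5, [0.9, 0.1, 0.3]))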
Acknowledgements
We thank the reviewers for thoughtful and constructive comments. MW was supported by NSF
grants IIS-0535278 and IIS-0447903.
References
[1] R. G. Cowell, A. P. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter. Probabilistic Networks and Expert Systems. Springer-Verlag, 1999.
[2] M. J. Beal and Z. Ghahramani. Variational Bayesian learning of directed graphical models with hidden variables. Bayesian Analysis, 1(4), 2006.
[3] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[4] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[5] T. P. Minka and J. Lafferty. Expectation propagation for the generative aspect model. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, volume 18, 2002.
[6] W. Buntine and A. Jakulin. Applying discrete PCA in data analysis. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, volume 20, 2004.
[7] Y. W. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In Advances in Neural Information Processing Systems, volume 19, 2007.
[8] D. M. Blei and M. I. Jordan. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1(1):121-144, 2006.
[9] K. Kurihara, M. Welling, and N. Vlassis. Accelerated variational DP mixture models. In Advances in Neural Information Processing Systems, volume 19, 2007.
[10] P. Liang, S. Petrov, M. I. Jordan, and D. Klein. The infinite PCFG using hierarchical Dirichlet processes. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2007.
[11] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639-650, 1994.
[12] T. L. Griffiths and M. Steyvers. A probabilistic approach to semantic representation. In Proceedings of the 24th Annual Conference of the Cognitive Science Society, 2002.
[13] C. E. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. Annals of Statistics, 2(6):1152-1174, 1974.
[14] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, 2003.
Trans-dimensional MCMC for Bayesian Policy
Learning
Matt Hoffman
Dept. of Computer Science
University of British Columbia
hoffmanm@cs.ubc.ca
Arnaud Doucet
Depts. of Statistics and Computer Science
University of British Columbia
arnaud@cs.ubc.ca
Nando de Freitas
Dept. of Computer Science
University of British Columbia
nando@cs.ubc.ca
Ajay Jasra
Dept. of Mathematics
Imperial College London
ajay.jasra@imperial.ac.uk
Abstract
A recently proposed formulation of the stochastic planning and control problem
as one of parameter estimation for suitable artificial statistical models has led to
the adoption of inference algorithms for this notoriously hard problem. At the
algorithmic level, the focus has been on developing Expectation-Maximization
(EM) algorithms. In this paper, we begin by making the crucial observation that
the stochastic control problem can be reinterpreted as one of trans-dimensional
inference. With this new interpretation, we are able to propose a novel reversible
jump Markov chain Monte Carlo (MCMC) algorithm that is more efficient than
its EM counterparts. Moreover, it enables us to implement full Bayesian policy
search, without the need for gradients and with one single Markov chain. The
new approach involves sampling directly from a distribution that is proportional
to the reward and, consequently, performs better than classic simulations methods
in situations where the reward is a rare event.
1 Introduction
Continuous state-space Markov Decision Processes (MDPs) are notoriously difficult to solve. Except for a few rare cases, including linear Gaussian models with quadratic cost, there is no
closed-form solution and approximations are required [4]. A large number of methods have been
proposed in the literature relying on value function approximation and policy search; including
[3, 10, 14, 16, 18]. In this paper, we follow the policy learning approach because of its promise and
remarkable success in complex domains; see for example [13, 15]. Our work is strongly motivated
by a recent formulation of stochastic planning and control problems as inference problems. This line
of work appears to have been initiated in [5], where the authors used EM as an alternative to standard
stochastic gradient algorithms to maximize an expected cost. In [2], a planning problem under uncertainty was solved using a Viterbi algorithm. This was later extended in [21]. In these works, the
number of time steps to reach the goal was fixed and the plans were not optimal in expected reward.
An important step toward surmounting these limitations was taken in [20, 19]. In these works, the
standard discounted reward control problem was expressed in terms of an infinite mixture of MDPs.
To make the problem tractable, the authors proposed to truncate the infinite horizon time.
Here, we make the observation that, in this probabilistic interpretation of stochastic control, the
objective function can be written as the expectation of a positive function with respect to a trans-dimensional probability distribution, i.e. a probability distribution defined on a union of subspaces
of different dimensions. By reinterpreting this function as an (artificial) marginal likelihood, it is
easy to see that it can also be maximized using an EM-type algorithm in the spirit of [5]. However,
the observation that we are dealing with a trans-dimensional distribution enables us to go beyond
EM. We believe it creates many opportunities for exploiting a large body of sophisticated inference
algorithms in the decision-making context.
In this paper, we propose a full Bayesian policy search alternative to the EM algorithm. In this
approach, we set a prior distribution on the set of policy parameters and derive an artificial posterior
distribution which is proportional to the prior times the expected reward. In the simpler context
of myopic Bayesian experimental design, a similar method was developed in [11] and applied successfully to high-dimensional problems [12]. Our method can be interpreted as a trans-dimensional
extension of [11]. We sample from the resulting artificial posterior distribution using a single trans-dimensional MCMC algorithm, which only involves a simple modification of the MCMC algorithm
developed to implement the EM.
Although the Bayesian policy search approach can benefit from gradient information, it does not
require gradients. Moreover, since the target is proportional to the expected reward, the simulation
is guided to areas of high reward automatically. In the fixed policy case, the value function is often
computed using importance sampling. In this context, our algorithm could be reinterpreted as an
MCMC algorithm sampling from the optimal importance distribution.
2 Model formulation
We consider the following class of discrete-time Markov decision processes (MDPs):
X_1 ∼ μ(·)
X_n | (X_{n−1} = x, A_{n−1} = a) ∼ f_a(· | x)
R_n | (X_n = x, A_n = a) ∼ g_a(· | x)
A_n | (X_n = x, θ) ∼ π_θ(· | x),   (1)
where n = 1, 2, . . . is a discrete-time index, μ(·) is the initial state distribution, {X_n} is the X-valued state process, {A_n} is the A-valued action process, {R_n} is a positive real-valued reward process, f_a denotes the transition density, g_a the reward density and π_θ is a randomized policy. If we have a deterministic policy then π_θ(a | x) = δ_{π_θ(x)}(a). In this case, the transition model f_a(· | x) assumes the parametrization f_θ(· | x). The reward model could also be parameterized as g_θ(· | x). It should be noted that for this work we will be working within a model-based framework and as a result will require knowledge of the transition model (although it could be learned).
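To fix ideas, a minimal sketch of simulating model (1), instantiated with the 2-d random-walk dynamics and Gaussian reward used in the experiments of Section 5; the particular constants are our illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

def rollout(policy, k, noise_std=0.1):
    # Simulate X_1 ~ mu, X_{n+1} = X_n + A_n + noise; return states, actions, r(X_k).
    m = np.array([1.0, 1.0])                        # reward centre (illustrative)
    x = rng.normal(0.0, 0.1, size=2)                # X_1 ~ mu(.), Gaussian about the origin
    xs, acts = [x], []
    for _ in range(k - 1):
        a = policy(x)                               # A_n ~ pi_theta(. | x)
        x = x + a + rng.normal(0.0, noise_std, size=2)
        xs.append(x); acts.append(a)
    r = float(np.exp(-0.5 * np.sum((x - m) ** 2) / 0.2 ** 2))  # isotropic Gaussian reward
    return xs, acts, r

xs, acts, r = rollout(lambda x: 0.05 * np.array([1.0, 1.0]) / np.sqrt(2), k=30)
print(len(xs), r)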
We are here interested in maximizing, with respect to the parameters θ of the policy, the expected future reward
V_μ(θ) = E[ Σ_{n=1}^∞ γ^{n−1} R_n ],
where 0 < γ < 1 is a discount factor and the expectation is with respect to the probabilistic model defined in (1). As shown in [20], it is possible to re-write this objective of optimizing an infinite-horizon discounted-reward MDP (where the reward happens at each step) as one of optimizing an infinite mixture of finite-horizon MDPs (where the reward only happens at the last time step).
In particular, we note that by introducing the trans-dimensional probability distribution on ⋃_k ({k} × X^k × A^k × R⁺) given by
p_θ(k, x_{1:k}, a_{1:k}, r_k) = (1 − γ) γ^{k−1} μ(x_1) g_{a_k}(r_k | x_k) ∏_{n=2}^k f_{a_{n−1}}(x_n | x_{n−1}) ∏_{n=1}^k π_θ(a_n | x_n),   (2)
we can easily rewrite V_μ(θ) as an infinite mixture of finite-horizon MDPs, with the reward happening at the last horizon step, namely at k. Specifically we have
V_μ(θ) = (1 − γ)^{−1} E_{p_θ}[R_K] = (1 − γ)^{−1} Σ_{k=1}^∞ ∫ r_k p_θ(k, x_{1:k}, a_{1:k}, r_k) dx_{1:k} da_{1:k} dr_k   (3)
for a randomized policy. Similarly, for a deterministic policy, the representation (3) also holds for the trans-dimensional probability distribution defined on ⋃_k ({k} × X^k × R⁺) given by
p_θ(k, x_{1:k}, r_k) = (1 − γ) γ^{k−1} μ(x_1) g_θ(r_k | x_k) ∏_{n=2}^k f_θ(x_n | x_{n−1}).   (4)
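Representation (3) suggests a direct Monte Carlo estimator: draw the horizon K from the geometric distribution implied by the (1 − γ)γ^{k−1} factor, roll the chain out for K steps, and average the terminal rewards. A sketch, reusing the rollout helper assumed above:

import numpy as np

rng = np.random.default_rng(1)

def value_estimate(policy, gamma, n_samples=1000):
    # V_mu(theta) = (1 - gamma)^{-1} E_{p_theta}[R_K], with K ~ Geometric(1 - gamma)
    total = 0.0
    for _ in range(n_samples):
        k = rng.geometric(1.0 - gamma)   # P(K = k) = (1 - gamma) gamma^{k-1}
        _, _, r_k = rollout(policy, k)   # terminal reward after k steps
        total += r_k
    return total / ((1.0 - gamma) * n_samples)

print(value_estimate(lambda x: 0.05 * np.array([1.0, 1.0]) / np.sqrt(2), gamma=0.95))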
The representation (3) was also used in [6] to compute the value function through MCMC for a fixed θ. In [20], this representation is exploited to maximize V_μ(θ) using the EM algorithm, which, applied to this problem, proceeds as follows at iteration i:
θ_i = arg max_{θ∈Θ} Q(θ_{i−1}, θ),
where
Q(θ_{i−1}, θ) = E_{p̃_{θ_{i−1}}}[ log ( R_K · p_θ(K, X_{1:K}, A_{1:K}, R_K) ) ],
p̃_θ(k, x_{1:k}, a_{1:k}, r_k) = r_k p_θ(k, x_{1:k}, a_{1:k}, r_k) / E_{p_θ}[R_K].
Unlike [20], we are interested in problems with potentially nonlinear and non-Gaussian properties. In these situations, the Q function cannot be calculated exactly. The standard Monte Carlo EM approach consists of sampling from p̃_θ(k, x_{1:k}, a_{1:k}, r_k) using MCMC to obtain a Monte Carlo estimate of the Q function. As p̃_θ(k, x_{1:k}, a_{1:k}, r_k) is proportional to the reward, the samples will consequently be drawn in regions of high reward. This is a particularly interesting feature in situations where the reward function is concentrated in a region of low probability mass under p_θ(k, x_{1:k}, r_k), which is often the case in high-dimensional control settings. Note that if we wanted to estimate V_μ(θ) using importance sampling, then the distribution p̃_θ(k, x_{1:k}, a_{1:k}, r_k) corresponds to the optimal zero-variance importance distribution.
Alternatively, instead of sampling from p̃_θ(k, x_{1:k}, a_{1:k}, r_k) using MCMC, we could proceed as in [20] and derive forward-backward algorithms for the E-step, implemented here using Sequential Monte Carlo (SMC) techniques. We have in fact done this using the smoothing algorithms proposed in [9]. However, we will focus the discussion on a different MCMC approach based on trans-dimensional simulation. As shown in the experiments, the latter does considerably better.
Finally, we remark that for a deterministic policy, we can introduce the trans-dimensional distribution
p̃_θ(k, x_{1:k}) = r_k p_θ(k, x_{1:k}) / E_{p_θ}[R_K].
In addition, and for ease of presentation only, we focus the discussion on deterministic policies and reward densities g_θ(r_n | x_n) = δ_{r(x_n)}(r_n); the extension of our algorithms to the randomized case is straightforward.
3 Bayesian policy exploration
The EM algorithm is particularly sensitive to initialization and might get trapped in a severe local maximum of V_μ(θ). Moreover, in the general state-space setting that we are considering, the particle smoothers in the E-step can be very expensive computationally.
To address these concerns, we propose an alternative full Bayesian approach. In the simpler context of experimental design, this approach was successfully developed in [11], [12]. The idea consists of introducing a vague prior distribution p(θ) on the parameters of the policy θ. We then define the new artificial probability distribution on Θ × ⋃_k ({k} × X^k) given by
p(θ, k, x_{1:k}) ∝ r(x_k) p_θ(k, x_{1:k}) p(θ).
By construction, this target distribution admits the following marginal in θ:
p(θ) ∝ V_μ(θ) p(θ),
and we can select an improper prior distribution p(θ) ∝ 1 if ∫_Θ V_μ(θ) dθ < ∞.
If we could sample from p(θ), then the generated samples θ^(i) would concentrate themselves in regions where V_μ(θ) is large. We cannot sample from p(θ) directly, but we can develop a trans-dimensional MCMC algorithm which asymptotically generates samples from p(θ, k, x_{1:k}), hence samples from p(θ).
Our algorithm proceeds as follows. Assume the current state of the Markov chain targeting p(θ, k, x_{1:k}) is (θ, k, x_{1:k}). We propose first to update the components (k, x_{1:k}) conditional upon θ using a combination of birth, death and update moves within the reversible jump MCMC framework [7, 8, 17]. Then we propose to update θ conditional upon the current value of (k, x_{1:k}). This can be achieved using a simple Metropolis-Hastings algorithm or more sophisticated dynamic Monte Carlo schemes. For example, if gradient information is available, one could adopt Langevin diffusions and the hybrid Monte Carlo algorithm [1]. The overall algorithm is depicted in Figure 1; a sketch of the θ-update appears after the figure. The details of the reversible jump algorithm are presented in the following section.
1. Initialization: set (k^(0), x^(0)_{1:k^(0)}, θ^(0)).
2. For i = 0 to N − 1
   • Sample u ∼ U[0, 1].
   • If (u ≤ b_k) then carry out a 'birth' move: increase the horizon length of the MDP, say k^(i) = k^(i−1) + 1, and insert a new state.
   • Else if (u ≤ b_k + d_k) then carry out a 'death' move: decrease the horizon length of the MDP, say k^(i) = k^(i−1) − 1, and remove an existing state.
   • Else let k^(i) = k^(i−1) and generate samples x^(i)_{1:k^(i)} of the MDP states.
   End If.
   • Sample the policy parameters θ^(i) conditional on the samples (x^(i)_{1:k^(i)}, k^(i)).
Figure 1: Generic reversible jump MCMC for Bayesian policy learning.
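The θ-update in Figure 1 can be as simple as a random-walk Metropolis-Hastings move on the conditional p(θ | k, x_{1:k}) ∝ r(x_k) p_θ(k, x_{1:k}) p(θ). A minimal sketch, where log_target is assumed to evaluate this unnormalized log-density for the current trajectory:

import numpy as np

rng = np.random.default_rng(2)

def mh_theta_update(theta, log_target, step=0.1):
    # One random-walk Metropolis-Hastings update of the policy parameters.
    proposal = theta + step * rng.normal(size=np.shape(theta))
    log_alpha = log_target(proposal) - log_target(theta)
    if np.log(rng.uniform()) < log_alpha:
        return proposal        # accept
    return theta               # reject

theta = np.array([0.0])
theta = mh_theta_update(theta, lambda t: -0.5 * np.sum((t - np.pi / 4) ** 2))
print(theta)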
We note that for a given θ the samples of the states and horizon generated by this Markov chain will also be distributed (asymptotically) according to the trans-dimensional distribution p̃_θ(k, x_{1:k}). Hence, they can easily be adapted to generate a Monte Carlo estimate of Q(θ_{i−1}, θ). This allows us to side-step the need for expensive smoothing algorithms in the E-step. The trans-dimensional simulation approach has the advantage that the samples will concentrate themselves automatically in regions where p̃_θ(k) has high probability mass. Moreover, unlike in the EM framework, it is no longer necessary to truncate the time domain.
We present a simple reversible jump method composed of two reversible moves (birth and death)
and several update moves. Assume the current state of the Markov chain targeting pe? (k, x1:k )
is (k, x1:k ). With probability1 bk , we propose a birth move; that is we sample a location
uniformly in the interval {1, ..., k + 1}, i.e. J ? U {1, ..., k + 1}, and propose the candidate
(k + 1, x1:j?1 , x? , xj:k ) where X ? ? q? ( ?| xj?1:j ). This candidate is accepted with probability
Abirth = min{1, ?birth } where we have for j ? {2, ..., k ? 1}
pe? (k + 1, x1:j?1 , x? , xj:k ) dk+1
?birth =
pe? (k, x1:k ) bk q? ( x? | xj?1:j )
?f? ( x? | xj?1 ) f? ( xj | x? ) dk+1
=
,
f? ( xj | xj?1 ) bk q? ( x? | xj?1:j )
for j = 1
?? (x? ) f? ( x1 | x? ) dk+1
?birth =
,
? (x1 ) bk q? ( x? | x1 )
1
In practice we can set the birth and death probabilities such that bk = dk = uk = 1/3.
4
and j = k + 1
?r (x? ) f? ( x? | xk ) dk+1
.
r (xk ) bk q? ( x? | xk )
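A sketch of the interior birth move under the ratio above; f, q_sample and q_pdf are assumed user-supplied densities and proposals, and only the acceptance logic is shown (positions are 0-indexed, so slot j here corresponds to j + 1 in the text).

import numpy as np

rng = np.random.default_rng(3)

def birth_move(x, gamma, f, q_sample, q_pdf, b_k, d_k1):
    # Propose inserting a state at a uniform interior slot; return the new trajectory.
    k = len(x)
    j = rng.integers(1, k)                      # interior slot: between x[j-1] and x[j]
    x_star = q_sample(x[j - 1], x[j])           # X* ~ q_theta(. | x_{j-1:j})
    ratio = (gamma * f(x_star, x[j - 1]) * f(x[j], x_star) * d_k1) / \
            (f(x[j], x[j - 1]) * b_k * q_pdf(x_star, x[j - 1], x[j]))
    if rng.uniform() < min(1.0, ratio):
        return x[:j] + [x_star] + x[j:]         # accept: k -> k + 1
    return x                                    # reject

# 1-d toy: Gaussian random-walk dynamics f(x_next, x_prev), midpoint proposal with std 0.5
f = lambda xn, xp: np.exp(-0.5 * (xn - xp) ** 2) / np.sqrt(2 * np.pi)
q_sample = lambda a, b: 0.5 * (a + b) + 0.5 * rng.normal()
q_pdf = lambda s, a, b: np.exp(-0.5 * ((s - 0.5 * (a + b)) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))
print(birth_move([0.0, 0.3, 0.8], gamma=0.95, f=f, q_sample=q_sample,
                 q_pdf=q_pdf, b_k=1 / 3, d_k1=1 / 3))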
With probability d_k, we propose a death move; that is, J ∼ U{1, ..., k} and we propose the candidate (k−1, x_{1:j−1}, x_{j+1:k}), which is accepted with probability A_death = min{1, ρ_death} where, for j ∈ {2, ..., k−1},
ρ_death = p̃_θ(k−1, x_{1:j−1}, x_{j+1:k}) b_{k−1} q_θ(x_j | x_{j−1:j+1}) / [p̃_θ(k, x_{1:k}) d_k]
        = f_θ(x_{j+1} | x_{j−1}) b_{k−1} q_θ(x_j | x_{j−1:j+1}) / [γ f_θ(x_{j+1} | x_j) f_θ(x_j | x_{j−1}) d_k],
for j = 1,
ρ_death = μ(x_2) q_θ(x_1 | x_2) b_{k−1} / [γ μ(x_1) f_θ(x_2 | x_1) d_k],
and for j = k,
ρ_death = r(x_{k−1}) q_θ(x_k | x_{k−1}) b_{k−1} / [γ r(x_k) f_θ(x_k | x_{k−1}) d_k].
The ρ_birth and ρ_death terms derived above can be thought of as ratios between the distribution over the newly proposed state of the chain (i.e. after the birth/death) and the current state. These terms must also ensure reversibility and the dimension-matching requirement for reversible jump MCMC. For more information see [7, 8].
Finally, with probability u_k = 1 − b_k − d_k, we propose a standard (fixed-dimensional) move where we update all or a subset of the components x_{1:k} using, say, Metropolis-Hastings or Gibbs moves. There are many design possibilities for these moves. In general, one should block some of the variables so as to improve the mixing time of the Markov chain. If one adopts a simple one-at-a-time Metropolis-Hastings scheme with proposals q_θ(x* | x_{j−1:j+1}) to update the j-th term, then the candidate is accepted with probability A_update = min{1, ρ_update} where, for j ∈ {2, ..., k−1},
ρ_update = p̃_θ(k, x_{1:j−1}, x*, x_{j+1:k}) q_θ(x_j | x_{j−1}, x*, x_{j+1}) / [p̃_θ(k, x_{1:k}) q_θ(x* | x_{j−1:j+1})]
         = f_θ(x* | x_{j−1}) f_θ(x_{j+1} | x*) q_θ(x_j | x_{j−1}, x*, x_{j+1}) / [f_θ(x_j | x_{j−1}) f_θ(x_{j+1} | x_j) q_θ(x* | x_{j−1:j+1})],
for j = 1,
ρ_update = μ(x*) f_θ(x_2 | x*) q_θ(x_1 | x*, x_2) / [μ(x_1) f_θ(x_2 | x_1) q_θ(x* | x_{1:2})],
and for j = k,
ρ_update = r(x*) f_θ(x* | x_{k−1}) q_θ(x_k | x*, x_{k−1}) / [r(x_k) f_θ(x_k | x_{k−1}) q_θ(x* | x_{k−1:k})].
Under weak assumptions on the model, the Markov chain {K^(i), X^(i)_{1:K^(i)}} generated by this transition kernel will be irreducible and aperiodic, and hence will asymptotically generate samples from the target distribution p̃_θ(k, x_{1:k}).
We emphasize that the structure of the distributions p̃_θ(x_{1:k} | k) will not in many applications vary significantly with k, and we often have p̃_θ(x_{1:k} | k) ≈ p̃_θ(x_{1:k} | k + 1). Hence the probability of having the reversible moves accepted will be reasonable. Standard Bayesian applications of reversible jump MCMC usually do not enjoy this property, which makes it more difficult to design fast-mixing algorithms. In this respect, our problem is easier; a sketch of one full sweep of the sampler is given below.
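Putting the pieces together, one sweep of the sampler interleaves a dimension move with the θ-update. A minimal sketch; birth, death, update and theta_update are assumed to be closures over the densities and proposals sketched earlier.

import numpy as np

rng = np.random.default_rng(4)

def rjmcmc_sweep(x, theta, birth, death, update, theta_update, b_k=1/3, d_k=1/3):
    # One reversible-jump sweep: choose birth/death/update, then refresh theta.
    u = rng.uniform()
    if u < b_k:
        x = birth(x, theta)
    elif u < b_k + d_k and len(x) > 1:
        x = death(x, theta)
    else:
        x = update(x, theta)
    theta = theta_update(theta, x)
    return x, theta

# trivial smoke test with identity moves
identity = lambda x, theta: x
print(rjmcmc_sweep([0.0, 0.5], 0.1, identity, identity, identity, lambda th, x: th))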
5 Experiments
It should be noted from the outset that the results presented in this paper are preliminary, and serve mainly as an illustration of the Monte Carlo algorithms presented earlier. With that note aside, even these simple examples will give us some intuition about the algorithms' performance and behavior.
Figure 2: This figure shows an illustration of the 2d state-space described in Section 5. Ten sample points are shown distributed according to μ, the initial distribution, and the contour plot corresponds to the reward function r. The red line denotes the policy parameterized by some angle θ, while a path sampled from this policy is drawn in blue.
We are also very optimistic as to the possible applications of analytic expressions for linear Gaussian
models, but space has not allowed us to present simulations for this class of models here.
We will consider state- and action-spaces X = A = R² such that each state x ∈ X is a 2d position and each action a ∈ A is a vector corresponding to a change in position. A new state at time n is given by X_n = X_{n−1} + A_{n−1} + ν_{n−1}, where ν_{n−1} denotes zero-mean Gaussian noise. Finally we will let μ be a normal distribution about the origin, and consider a reward (as in [20]) given by an unnormalized Gaussian about some point m, i.e. r(x) = exp(−½ (x − m)ᵀ Σ⁻¹ (x − m)). An illustration of this space can be seen in Figure 2, where m = (1, 1).
For these experiments we chose a simple, stochastic policy parameterized by θ ∈ [0, 2π]. Under this policy, an action A_n = (w + η) · (cos(θ + ε), sin(θ + ε)) is taken, where η and ε are normally distributed random variables and w is some (small) constant step-length. Intuitively, this policy corresponds to choosing a direction θ in which the agent will walk. While unrealistic from a real-world perspective, this allows us a method to easily evaluate and plot the convergence of our algorithm. For a state-space with initial distribution and reward function defined as in Figure 2, the optimal policy corresponds to θ = π/4; a sketch of the action sampling is given below.
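A sketch of this toy policy; the noise scales are our assumptions.

import numpy as np

rng = np.random.default_rng(5)

def sample_action(theta, w=0.05, step_noise=0.01, angle_noise=0.1):
    # A_n = (w + eta) * (cos(theta + eps), sin(theta + eps)) with Gaussian eta, eps
    eta = step_noise * rng.normal()
    eps = angle_noise * rng.normal()
    return (w + eta) * np.array([np.cos(theta + eps), np.sin(theta + eps)])

print(sample_action(np.pi / 4))   # near-optimal direction for a reward centred at (1, 1)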
We first implemented a simple SMC-based extension of the EM algorithm described in [20], wherein a particle filter was used for the forward and backward filters. The plots in Figure 3 compare the SMC-based and trans-dimensional approaches on this synthetic example. Here the inferred value of θ is shown against CPU time, averaged over 5 runs. The first thing of note is the terrible performance of the SMC-based algorithm; in fact we had to make the reward broader and closer to the initial position in order to ensure that the algorithm converges in a reasonable amount of time. This comes as no surprise considering the O(N^2 k_max^2) time complexity necessary for computing the importance weights. While there do exist methods [9] for reducing this complexity to O(N log N k_max^2), the discrepancy between this and the reversible jump MCMC method suggests that the MCMC approach may be better adapted to this class of problems. In the finite/discrete case it is also possible, as shown by Toussaint et al. (2006), to reduce the k_max^2 term to k_max by calculating updates only using messages from the backwards recursion. The SMC method might further be improved by better choices for the artificial distribution ν_n(x_n) in the backwards filter. In this problem we used a vague Gaussian centered on the relevant state-space. It is however possible that any added benefit from a more informative ν distribution is counterbalanced by the time required to calculate it, for example by simulating particles forward in order to find the invariant distribution, etc.
Also shown in Figure 3 is the performance of a Monte Carlo EM algorithm using reversible jump MCMC in the E-step. Both this and the fully Bayesian approach perform comparably, although the fully Bayesian approach shows less in-run variance, as well as less variance between runs. The EM algorithm was also more sensitive, and we were forced to increase the number of samples N used
[Figure 3 plots: θ (in radians) against cpu time (in seconds); legend: Two-filter EM, Monte Carlo EM, Bayes. policy search, Optimal (baseline).]
Figure 3: The left figure shows estimates of the policy parameter θ as a function of the CPU time used to calculate that value. This data is shown for the three discussed Monte Carlo algorithms as applied to a synthetic example and has been averaged over five runs; error bars are shown for the SMC-based EM algorithm. Because of the poor performance of the SMC-based algorithm it is difficult to compare the performance of the other two algorithms using only this plot. The right figure shows a smoothed and 'zoomed' version of the left plot in order to show the reversible-jump EM algorithm and the fully Bayesian algorithm in more detail. In both plots a red line denotes the known optimal policy parameter of π/4.
by the E-step as the algorithm progressed, as well as controlling the learning rate with a smoothing parameter. For higher-dimensional and/or larger models it is not inconceivable that this could have an adverse effect on the algorithm's performance.
Finally, we also compared the proposed Bayesian policy exploration method to the PEGASUS [14]
approach using a local search method. We initially tried using a policy-gradient approach, but
because of the very highly-peaked rewards the gradients become very poorly scaled and would have
required more tuning. As shown in Figure 4, the Bayesian strategy is more efficient in this rare
event setting. As the dimension of the state-space increases, we expect this difference to become
even more pronounced.
6 Discussion
We believe that formulating stochastic control as a trans-dimensional inference problem is fruitful.
This formulation relies on minimal assumptions and allows us to apply modern inference algorithms
to solve control problems. We have focused here on Monte Carlo methods and have presented, to the best of our knowledge, the first application of reversible jump MCMC to policy search. Our results, on an illustrative example, showed that this trans-dimensional MCMC algorithm is more effective than standard policy search methods and alternative Monte Carlo methods relying on
particle filters. However, this methodology remains to be tested on high-dimensional problems. For
such scenarios, we expect that it will be necessary to develop more efficient MCMC strategies to
explore the policy space efficiently.
References
[1] C. Andrieu, N. de Freitas, A. Doucet, and M. I. Jordan. An introduction to MCMC for machine learning. Machine Learning, 50:5-43, 2003.
[2] H. Attias. Planning by probabilistic inference. In Uncertainty in Artificial Intelligence, 2003.
[3] J. Baxter and P. L. Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research, 15:319-350, 2001.
[4] D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 1995.
[5] P. Dayan and G. E. Hinton. Using EM for reinforcement learning. Neural Computation, 9:271-278, 1997.
[Figure 4 plot: policy parameter (theta) against number of samples taken from the transition model; legend: rjmdp, pegasus, optimal.]
Figure 4: Convergence of PEGASUS and our Bayesian policy search algorithm when started from θ = 0 and converging to the optimum of θ* = π/4. The plots are averaged over 10 runs. For our algorithm we plot samples taken directly from the MCMC algorithm itself: plotting the empirical average would produce an estimate whose convergence is almost immediate, but we also wanted to show the 'burn-in' period. For both algorithms lines denoting one standard deviation are shown and performance is plotted against the number of samples taken from the transition model.
[6] A. Doucet and V. B. Tadic. On solving integral equations using Markov chain Monte Carlo methods. Technical Report CUED-F-INFENG 444, Cambridge University Engineering Department, 2004.
[7] P. J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82:711-732, 1995.
[8] P. J. Green. Trans-dimensional Markov chain Monte Carlo. In Highly Structured Stochastic Systems, 2003.
[9] M. Klaas, M. Briers, N. de Freitas, A. Doucet, and S. Maskell. Fast particle smoothing: If I had a million particles. In International Conference on Machine Learning, 2006.
[10] G. Lawrence, N. Cowan, and S. Russell. Efficient gradient estimation for motor control learning. In Uncertainty in Artificial Intelligence, pages 354-36, 2003.
[11] P. Müller. Simulation based optimal design. Bayesian Statistics, 6, 1999.
[12] P. Müller, B. Sansó, and M. De Iorio. Optimal Bayesian design by inhomogeneous Markov chain simulation. J. American Stat. Assoc., 99:788-798, 2004.
[13] A. Ng, A. Coates, M. Diel, V. Ganapathi, J. Schulte, B. Tse, E. Berger, and E. Liang. Inverted autonomous helicopter flight via reinforcement learning. In International Symposium on Experimental Robotics, 2004.
[14] A. Y. Ng and M. I. Jordan. PEGASUS: A policy search method for large MDPs and POMDPs. In Uncertainty in Artificial Intelligence, 2000.
[15] J. Peters and S. Schaal. Policy gradient methods for robotics. In IEEE International Conference on Intelligent Robotics Systems, 2006.
[16] M. Porta, N. Vlassis, M. T. J. Spaan, and P. Poupart. Point-based value iteration for continuous POMDPs. Journal of Machine Learning Research, 7:2329-2367, 2006.
[17] S. Richardson and P. J. Green. On Bayesian analysis of mixtures with an unknown number of components. Journal of the Royal Statistical Society B, 59(4):731-792, 1997.
[18] S. Thrun. Monte Carlo POMDPs. In S. Solla, T. Leen, and K.-R. Müller, editors, Neural Information Processing Systems, pages 1064-1070. MIT Press, 2000.
[19] M. Toussaint, S. Harmeling, and A. Storkey. Probabilistic inference for solving (PO)MDPs. Technical Report EDI-INF-RR-0934, University of Edinburgh, School of Informatics, 2006.
[20] M. Toussaint and A. Storkey. Probabilistic inference for solving discrete and continuous state Markov decision processes. In International Conference on Machine Learning, 2006.
[21] D. Verma and R. P. N. Rao. Planning and acting in uncertain environments using probabilistic inference. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2006.
| 3343 |@word version:1 simulation:7 tried:1 carry:2 initial:4 denoting:1 freitas:3 existing:1 current:4 written:1 must:1 porta:1 informative:1 klaas:1 enables:2 wanted:2 analytic:1 plot:8 motor:1 update:11 aside:1 intelligence:4 xk:19 parametrization:1 location:1 simpler:2 five:1 become:2 symposium:1 consists:2 reinterpreting:1 introduce:1 expected:5 behavior:1 themselves:2 planning:5 brier:1 relying:2 discounted:2 automatically:2 cpu:4 considering:2 begin:1 moreover:4 mass:2 interpreted:1 developed:4 exactly:1 biometrika:1 scaled:1 assoc:1 uk:3 control:10 normally:1 enjoy:1 bertsekas:1 positive:2 engineering:1 local:2 initiated:1 path:1 might:2 chose:1 burn:1 initialization:2 suggests:1 co:1 ease:1 smc:7 adoption:1 averaged:3 harmeling:1 union:1 practice:1 implement:3 block:1 area:1 empirical:1 thought:1 significantly:1 matching:1 outset:1 get:1 cannot:2 ga:2 targeting:2 context:4 kmax:4 fruitful:1 deterministic:4 maximizing:1 go:1 straightforward:1 focused:1 classic:1 autonomous:1 target:3 construction:1 controlling:1 programming:1 origin:1 probability1:1 storkey:2 expensive:2 particularly:2 ep:3 solved:1 calculate:2 region:4 improper:1 solla:1 decrease:1 russell:1 intuition:1 environment:1 complexity:2 reward:24 dynamic:2 rewrite:1 solving:3 serve:1 creates:1 upon:2 maskell:1 vague:2 iorio:1 easily:3 po:1 epe:1 forced:1 fast:2 effective:1 london:1 monte:19 artificial:10 choosing:1 birth:11 whose:1 larger:1 solve:2 valued:3 say:3 statistic:2 richardson:1 itself:1 advantage:1 rr:1 propose:10 zoomed:1 helicopter:1 relevant:1 transdimensional:2 mixing:2 poorly:1 pronounced:1 exploiting:1 convergence:5 requirement:1 optimum:1 produce:1 converges:1 cued:1 derive:2 develop:1 ac:1 stat:1 school:1 implemented:2 c:3 involves:2 come:1 concentrate:2 guided:1 direction:1 aperiodic:1 inhomogeneous:1 filter:5 stochastic:8 exploration:2 nando:2 centered:1 require:2 preliminary:1 extension:3 insert:1 sans:1 hold:1 normal:1 exp:1 lawrence:1 algorithmic:1 viterbi:1 vary:1 adopt:1 estimation:3 sensitive:2 successfully:2 hoffman:1 uller:3 mit:1 gaussian:6 broader:1 derived:1 focus:3 schaal:1 likelihood:1 mainly:1 baseline:2 inference:10 dayan:1 initially:1 interested:2 arg:1 overall:1 plan:1 smoothing:4 marginal:2 schulte:1 having:1 reversibility:1 sampling:6 ng:2 progressed:1 peaked:1 future:1 discrepancy:1 report:2 intelligent:2 few:1 irreducible:1 modern:1 composed:1 message:1 possibility:1 highly:2 reinterpreted:2 severe:1 mixture:4 myopic:1 chain:13 integral:1 closer:1 necessary:3 walk:1 re:1 plotted:1 minimal:1 uncertain:1 tse:1 earlier:1 rao:1 maximization:1 cost:2 introducing:2 deviation:1 subset:1 rare:3 considerably:1 synthetic:2 drk:1 density:2 international:4 randomized:3 probabilistic:6 informatics:1 conf:1 american:1 ganapathi:1 de:4 int:1 later:1 closed:1 optimistic:1 red:2 bayes:2 variance:3 efficiently:1 maximized:1 weak:1 bayesian:19 comparably:1 carlo:19 notoriously:2 pomdps:3 reach:1 against:3 radian:2 sampled:1 newly:1 knowledge:2 sophisticated:2 appears:1 higher:1 follow:1 methodology:1 wherein:1 improved:1 formulation:4 done:1 leen:1 strongly:1 working:1 hastings:3 flight:1 nonlinear:1 reversible:14 scientific:1 believe:2 mdp:4 diel:1 matt:1 counterpart:1 andrieu:1 evolution:1 hence:4 arnaud:2 death:11 sin:1 noted:2 illustrative:1 unnormalized:1 performs:1 novel:1 recently:1 million:1 discussed:1 interpretation:2 cambridge:1 gibbs:1 tuning:1 mathematics:1 similarly:1 particle:6 had:2 robot:1 longer:1 etc:1 posterior:2 recent:1 showed:1 perspective:1 optimizing:2 inf:1 scenario:1 
success:1 exploited:1 inverted:1 seen:1 maximize:2 period:1 smoother:1 full:3 technical:2 determination:1 dept:3 a1:9 converging:1 infeng:1 ajay:2 expectation:3 iteration:2 kernel:1 achieved:1 robotics:3 proposal:1 addition:1 interval:1 else:2 crucial:1 unlike:2 thing:1 cowan:1 spirit:1 jordan:2 backwards:3 easy:1 baxter:1 xj:37 affect:1 counterbalanced:1 reduce:1 idea:1 attias:1 motivated:1 expression:1 bartlett:1 peter:1 proceed:1 action:4 remark:1 amount:1 discount:1 ten:1 concentrated:1 generate:4 terrible:1 exist:1 coates:1 trapped:1 blue:1 discrete:4 write:1 promise:1 imperial:2 drawn:2 diffusion:1 backward:1 asymptotically:3 run:5 angle:1 parameterized:3 uncertainty:4 almost:1 reasonable:2 decision:4 fan:1 quadratic:1 adapted:2 x2:6 min:3 formulating:1 performing:1 jasra:2 department:1 developing:1 according:2 structured:1 truncate:2 combination:1 poor:1 em:20 spaan:1 metropolis:3 making:2 modification:1 happens:2 intuitively:1 invariant:1 taken:5 computationally:1 equation:1 remains:1 tractable:1 end:1 available:1 apply:1 generic:1 simulating:1 alternative:4 assumes:1 denotes:4 ensure:2 opportunity:1 calculating:1 inconceivable:1 society:1 rsj:1 objective:2 move:11 added:1 pegasus:4 fa:3 strategy:2 gradient:10 subspace:1 thrun:1 gak:1 athena:1 poupart:1 toward:1 length:3 index:1 berger:1 illustration:3 ratio:1 tadic:1 liang:1 difficult:3 potentially:1 design:6 policy:40 unknown:1 perform:1 observation:3 markov:15 finite:3 immediate:1 situation:3 extended:1 langevin:1 hinton:1 vlassis:1 rn:5 smoothed:1 inferred:1 bk:13 edi:1 namely:1 required:3 learned:1 trans:16 address:1 able:1 beyond:1 proceeds:2 usually:1 bar:1 including:2 max:1 green:3 royal:1 unrealistic:1 suitable:1 event:2 hybrid:1 recursion:1 scheme:2 improve:1 mdps:7 theta:1 started:1 columbia:3 prior:4 literature:1 fully:3 expect:2 interesting:1 limitation:1 proportional:4 remarkable:1 toussaint:3 agent:1 plotting:1 editor:1 verma:1 last:2 side:1 benefit:2 distributed:3 edinburgh:1 dimension:3 xn:15 transition:7 calculated:1 contour:1 world:1 author:2 forward:3 jump:12 adopts:1 reinforcement:2 emphasize:1 dealing:1 doucet:4 alternatively:1 search:11 continuous:3 ca:3 complex:1 domain:2 noise:1 allowed:1 body:1 x1:53 position:3 candidate:4 pe:19 british:3 rk:22 dx1:1 r2:1 dk:12 admits:1 concern:1 sequential:1 importance:5 horizon:9 depts:1 easier:1 surprise:1 da1:1 depicted:1 led:1 explore:1 happening:1 expressed:1 ubc:3 corresponds:4 relies:1 conditional:3 goal:1 presentation:1 consequently:2 hard:1 change:1 adverse:1 infinite:6 except:1 specifically:1 uniformly:1 reducing:1 acting:1 accepted:4 experimental:3 select:1 college:1 surmounting:1 latter:1 evaluate:1 mcmc:22 tested:1 |
Nearest-Neighbor-Based Active Learning for Rare
Category Detection
Jingrui He
School of Computer Science
Carnegie Mellon University
jingruih@cs.cmu.edu
Jaime Carbonell
School of Computer Science
Carnegie Mellon University
jgc@cs.cmu.edu
Abstract
Rare category detection is an open challenge for active learning, especially in
the de-novo case (no labeled examples), but of significant practical importance for
data mining - e.g. detecting new financial transaction fraud patterns, where normal
legitimate transactions dominate. This paper develops a new method for detecting
an instance of each minority class via an unsupervised local-density-differential
sampling strategy. Essentially a variable-scale nearest neighbor process is used to
optimize the probability of sampling tightly-grouped minority classes, subject to
a local smoothness assumption of the majority class. Results on both synthetic
and real data sets are very positive, detecting each minority class with only a fraction of the actively sampled points required by random sampling and by Pelleg's
Interleave method, the prior best technique in the sparse literature on this topic.
1
Introduction
In many real world problems, the proportion of data points in different classes is highly skewed:
some classes dominate the data set (majority classes), and the remaining classes may have only a
few examples (minority classes). However, it is very important to detect examples from the minority
classes via active learning. For example, in fraud detection tasks, most of the records correspond to
normal transactions, and yet once we identify a new type of fraud transaction, we are well on our
way to stopping similar future fraud transactions [2]. Another example is in astronomy. Most of
the objects in sky survey images are explainable by current theories and models. Only 0.001% of
the objects are truly beyond the scope of current science and may lead to new discoveries [8]. Rare
category detection is also a bottleneck in reducing the sampling complexity of active learning [1,
5]. The difference between rare category detection and outlier detection is that: in rare category
detection, the examples from one or more minority classes are often self-similar, potentially forming
compact clusters, while in outlier detection, the outliers are typically scattered.
Currently, only a few methods have been proposed to address this challenge. For example, in [8],
the authors assumed a mixture model to fit the data, and selected examples for labeling according
to different criteria; in [6], the authors proposed a generic consistency algorithm, and proved upper
bounds and lower bounds for this algorithm in some specific situations. Most of the existing methods
require that the majority classes and the minority classes be separable or work best in the separable
case. However, in real applications, the support regions of the majority and minority classes often
overlap, which affects negatively the performance of these methods.
In this paper, we propose a novel method for rare category detection in the context of active learning.
We typically start de-novo, no category labels, though our algorithm makes no such assumption.
Different from existing methods, we aim to solve the hard case, i.e. we do not assume separability or
near-separability of the classes. Intuitively, the method makes use of nearest neighbors to measure
local density around each example. In each iteration, the algorithm selects an example with the
1
maximum change in local density on a certain scale, and asks the oracle for its label. The method
stops once it has found at least one example from each class (given the knowledge of the number
of classes). When the minority classes form compact clusters and the majority class distribution
is locally smooth, the method will select examples both on the boundary and in the interior of the
minority classes, and is proved to be effective theoretically. Experimental results on both synthetic
and real data sets show the superiority of our method over existing methods.
The rest of the paper is organized as follows. In Section 2, we introduce our method and provide
theoretical justification, first for binary classes and then for multiple classes. Section 3 gives experimental results. Finally, we conclude the paper in Section 4.
2
Rare category detection
2.1
Problem definition
Given a set of unlabeled examples S = {x_1, . . . , x_n}, x_i ∈ R^d, which come from m distinct classes, i.e. y_i ∈ {1, . . . , m}, the goal is to find at least one example from each class by requesting as few total labels as possible. For the sake of simplicity, assume that there is only one majority class, which corresponds to y_i = 1, and all the other classes are minority classes.
2.2
Rare category detection for the binary case
First let us focus on the simplest case where m = 2, and Pr[y_i = 1] ≫ Pr[y_i = 2] = p, i.e. p ≪ 1. Here, we assume that we have an estimate of the value of p a priori. Next, we introduce our
method for rare category detection based on nearest neighbors, which is presented in Algorithm 1.
The basic idea is to find maximum changes in local density, which might indicate the location of a
rare category.
The algorithm works as follows. Given the unlabeled set S and the prior of the minority class p, we
first estimate the number K of minority class examples in S. Then, for each example, we record
its distance from the Kth nearest neighbor, which could be realized by kd-trees [7]. The minimum
distance over all the examples is assigned to r0 . Next, we draw a hyper-ball centered at each example
with radius r0 , and count the number of examples enclosed by this hyper-ball, which is denoted as
ni . ni is roughly in proportion to the local density. To measure the change of local density around a
certain point xi , in each iteration of Step 3, we subtract nj of neighboring points from ni , and let the
maximum value be the score of xi . The example with the maximum score is selected for labeling
by the oracle. If the example is from the minority class, stop the iteration; otherwise, enlarge the
neighborhood where the scores of the examples are re-calculated and continue.
Before giving theoretical justification, here, we give an intuitive explanation of why the algorithm
works. Assume that the minority class is concentrated in a small region and the probability distribution function (pdf) of the majority class is locally smooth. Firstly, since the support region of the
minority class is very small, it is important to find its scale. The r0 value obtained in Step 1 will
be used to calculate the local density ni . Since r0 is based on the minimum K th nearest neighbor
distance, it is never too large to smooth out changes of local density, and thus it is a good measure of
the scale. Secondly, the score of a certain point, corresponding to the change in local density, is the
maximum of the difference in local density between this point and all of its neighboring points. In
this way, we are not only able to select points on the boundary of the minority class, but also points
in the interior, given that the region is small. Finally, by gradually enlarging the neighborhood where
the scores are calculated, we can further explore the interior of the support region, and increase our
chance of finding a minority class example.
2.3
Correctness
In this subsection, we prove that if the minority class is concentrated in a small region and the pdf
of the majority class is locally smooth, the proposed algorithm will repeatedly sample in the region
where minority class examples occur with high probability.
Let f1 (x) and f2 (x) denote the pdf of the majority and minority classes respectively, where x ? Rd .
To be precise, we make the following assumptions.
Algorithm 1 Nearest-Neighbor-Based Rare Category Detection for the Binary Case (NNDB)
Require: S, p
1: Let K = np. For each example, calculate the distance to its K-th nearest neighbor. Set r′ to be
   the minimum value among all the examples.
2: ∀xi ∈ S, let NN(xi, r′) = {x | x ∈ S, ‖x − xi‖ ≤ r′}, and ni = |NN(xi, r′)|.
3: for t = 1 : n do
4:    ∀xi ∈ S, if xi has not been selected, then si = max_{xj ∈ NN(xi, t·r′)} (ni − nj); otherwise, si = −∞.
5:    Query x = arg max_{xi ∈ S} si.
6:    If the label of x is 2, break.
7: end for
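To make the listing concrete, here is a minimal NumPy sketch of NNDB. It is our own illustrative rendering, not code from the paper: the brute-force distance matrix stands in for the kd-tree of [7], labels are 1 (majority) and 2 (minority) as above, and `query_label` is a hypothetical oracle callback.

```python
import numpy as np

def nndb(X, p, query_label):
    """Minimal sketch of Algorithm 1 (NNDB).

    X           : (n, d) array of unlabeled examples.
    p           : assumed prior of the minority class.
    query_label : oracle mapping an index into X to its class label (1 or 2).
    """
    n = len(X)
    K = max(1, int(n * p))
    # Pairwise distances; a kd-tree would avoid the O(n^2) cost.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Step 1: r' is the minimum K-th nearest-neighbor distance
    # (column 0 of each sorted row is the point itself).
    r_prime = np.sort(D, axis=1)[:, K].min()
    # Step 2: local counts n_i within radius r'.
    counts = (D <= r_prime).sum(axis=1)
    selected = np.zeros(n, dtype=bool)
    # Step 3: gradually enlarge the neighborhood.
    for t in range(1, n + 1):
        scores = np.full(n, -np.inf)
        for i in range(n):
            if selected[i]:
                continue
            nbrs = np.where(D[i] <= t * r_prime)[0]
            scores[i] = (counts[i] - counts[nbrs]).max()
        i_star = int(np.argmax(scores))
        selected[i_star] = True
        if query_label(i_star) == 2:     # minority class found
            return i_star
    return None
```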
Assumptions
1. f2(x) is uniform within a hyper-ball B of radius r centered at b, i.e. f2(x) = 1/V(r) if
   x ∈ B, and 0 otherwise, where V(r) ∝ r^d is the volume of B.
2. f1(x) is bounded and positive in B¹, i.e. f1(x) ≥ c1·p/((1 − p)V(r)), ∀x ∈ B, and
   f1(x) ≤ c2·p/((1 − p)V(r)), ∀x ∈ R^d, where c1, c2 > 0 are two constants.
¹ Notice that here we are only dealing with the hard case where f1(x) is positive within B. In the separable
case where the support regions of the two classes do not overlap, we can use other methods to detect the
minority class, such as the one proposed in [8].
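Both assumptions are phrased in terms of the hyper-ball volume V(r). For reference, the standard closed form (a textbook fact, not something introduced by this paper) is:

```latex
% Volume of a d-dimensional ball of radius r:
V(r) \;=\; \frac{\pi^{d/2}}{\Gamma\!\bigl(\tfrac{d}{2}+1\bigr)}\, r^{d},
\qquad\text{so } V(r) \propto r^{d} \text{ and } V(2r) = 2^{d}\,V(r).
```

The identity V(2r) = 2^d·V(r) is used again in the proof of the Main Theorem below.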
With the above assumptions, we have the following claim and theorem. Note that variants of the
following proof apply if we assume a different minority class distribution, such as a tight Gaussian.
Claim 1. ∀ε, δ > 0, if

n ≥ max{ (1/(2c1²p²))·log(3/δ), (1/(2(1 − 2^{−d})²p²))·log(3/δ), (1/(ε⁴·V(r2/2)⁴))·log(3/δ) },

where r2 = r/(1 + c2)^{1/d} and V(r2/2) is the volume of a hyper-ball with radius r2/2, then with probability at least 1 − δ,
r2/2 ≤ r′ ≤ r and |ni/n − E(ni/n)| ≤ ε·V(r′), 1 ≤ i ≤ n, where V(r′) is the volume of a hyper-ball
with radius r′.
Proof. First, notice that the expected proportion of points falling inside B satisfies E(|NN(b, r)|/n) ≥ (c1 + 1)p,
and that the maximum expected proportion of points falling inside any hyper-ball of radius r2/2 satisfies
max_{x ∈ R^d} E(|NN(x, r2/2)|/n) ≤ 2^{−d}·p. Then

Pr[ r′ > r or r′ < r2/2 or ∃xi ∈ S s.t. |ni/n − E(ni/n)| > εV(r′) ]
  ≤ Pr[r′ > r] + Pr[r′ < r2/2] + Pr[ r2/2 ≤ r′ ≤ r and ∃xi ∈ S s.t. |ni/n − E(ni/n)| > εV(r′) ]
  ≤ Pr[|NN(b, r)| < K] + Pr[ max_{x ∈ R^d} |NN(x, r2/2)| > K ] + n·Pr[ |ni/n − E(ni/n)| > εV(r′) | r′ ≥ r2/2 ]
  = Pr[|NN(b, r)|/n < p] + Pr[ max_{x ∈ R^d} |NN(x, r2/2)|/n > p ] + n·Pr[ |ni/n − E(ni/n)| > εV(r′) | r′ ≥ r2/2 ]
  ≤ e^{−2n·c1²p²} + e^{−2n·(1−2^{−d})²p²} + 2n·e^{−2n·ε²V(r′)²}

where the last inequality is based on the Hoeffding bound. Letting e^{−2n·c1²p²} ≤ δ/3, e^{−2n·(1−2^{−d})²p²} ≤ δ/3 and
2n·e^{−2n·ε²V(r′)²} ≤ 2n·e^{−2n·ε²V(r2/2)²} ≤ δ/3, we obtain n ≥ (1/(2c1²p²))·log(3/δ),
n ≥ (1/(2(1−2^{−d})²p²))·log(3/δ), and n ≥ (1/(ε⁴·V(r2/2)⁴))·log(3/δ). ∎
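For completeness, the concentration inequality invoked in the last step is the standard Hoeffding bound for sums of i.i.d. bounded variables; the statement below is a textbook fact, and the mapping to the indicator variables of this proof is our own annotation.

```latex
% Hoeffding's inequality: for i.i.d. Z_1,...,Z_n with Z_j \in [0,1],
\Pr\!\left[\,\Bigl|\frac{1}{n}\sum_{j=1}^{n} Z_j \;-\; \mathbb{E}\,\frac{1}{n}\sum_{j=1}^{n} Z_j\Bigr| \ge t\,\right]
\;\le\; 2\,e^{-2nt^{2}}.
% With Z_j = 1\{x_j \in B(b,r)\} this gives the first term above (one-sided,
% hence no factor 2); with Z_j = 1\{x_j \in B(x_i, r')\} and a union bound
% over the n centers x_i it gives the third term.
```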
Based on Claim 1, we get the following theorem, which shows the effectiveness of the proposed
method.
Main Theorem. If
1. Let B² be the hyper-ball centered at b with radius 2r. The minimum distance between
   the points inside B and the ones outside B² is not too large, i.e. min{‖xi − xj‖ | xi, xj ∈
   S, ‖xi − b‖ ≤ r, ‖xj − b‖ > 2r} ≤ α, where α is a positive parameter.
2. f1(x) is locally smooth, i.e. ∀x, y ∈ R^d, |f1(x) − f1(y)| ≤ (ε/α)·‖x − y‖, where
   ε ≤ p²·OV(r2/2, r)/(2^{d+1}·V(r)²) and OV(r2/2, r) is the volume of the overlapping region of two hyper-balls: one is of radius
   r, the other one is of radius r2/2, and its center is on the sphere of the bigger one.
3. The number of examples is sufficiently large,
   i.e. n ≥ max{ (1/(2c1²p²))·log(3/δ), (1/(2(1−2^{−d})²p²))·log(3/δ), (1/((1−p)⁴·ε⁴·V(r2/2)⁴))·log(3/δ) }.
then with probability at least 1 − δ, after ⌈2α/r2⌉ iterations, NNDB will query at least one example
whose probability of coming from the minority class is at least 1/3, and it will continue querying such
examples until the ⌊(2^d/(p(1−p)) − 2)·α/r⌋-th iteration.
Proof. Based on Claim 1, using condition 3, if the number of examples is sufficiently large, then with
probability at least 1 − δ, r2/2 ≤ r′ ≤ r and |ni/n − E(ni/n)| ≤ (1−p)·εV(r′), 1 ≤ i ≤ n. According to
condition 2, ∀xi, xj ∈ S s.t. ‖xi − b‖ > 2r, ‖xj − b‖ > 2r and ‖xi − xj‖ ≤ α, E(ni/n) and E(nj/n)
will not be affected by the minority class, and |E(ni/n) − E(nj/n)| ≤ (1−p)·εV(r′) ≤ (1−p)·εV(r).
Note that α is always bigger than r. Based on the above inequalities, we have

|ni/n − nj/n| ≤ |ni/n − E(ni/n)| + |nj/n − E(nj/n)| + |E(ni/n) − E(nj/n)| ≤ 3(1−p)·εV(r)   (1)

From inequality (1), it is not hard to see that ∀xi, xj ∈ S s.t. ‖xi − b‖ > 2r and ‖xi − xj‖ ≤ α,
ni/n − nj/n ≤ 3(1−p)·εV(r); i.e., when t·r′ = α,

si/n ≤ 3(1−p)·εV(r)   (2)

This is because if ‖xj − b‖ ≤ 2r, the minority class may also contribute to nj/n, and thus the score
may be even smaller.

On the other hand, based on condition 1, there exist two points xk, xl ∈ S s.t. ‖xk − b‖ ≤ r,
‖xl − b‖ > 2r, and ‖xk − xl‖ ≤ α. Since the contribution of the minority class to E(nk/n) is at least
p·OV(r2/2, r)/V(r), we have E(nk/n) − E(nl/n) ≥ p·OV(r2/2, r)/V(r) − (1−p)·εV(r′) ≥ p·OV(r2/2, r)/V(r) − (1−p)·εV(r). Since
for any example xi ∈ S we have |ni/n − E(ni/n)| ≤ (1−p)·εV(r′) ≤ (1−p)·εV(r), therefore

nk/n − nl/n ≥ p·OV(r2/2, r)/V(r) − 3(1−p)·εV(r) ≥ p·OV(r2/2, r)/V(r) − 3(1−p)p²·OV(r2/2, r)/(2^{d+1}·V(r))

Since p is very small, p ≫ 3(1−p)p²/2^{d+1}; therefore nk/n − nl/n > 3(1−p)·εV(r); i.e., when t·r′ = α,

sk/n > 3(1−p)·εV(r)   (3)

In Step 4 of the proposed method, we gradually enlarge the neighborhood used to calculate the change of
local density. When t·r′ = α, based on inequalities (2) and (3), ∀xi ∈ S with ‖xi − b‖ > 2r, we have
sk > si. Therefore, in this round of iteration, we will pick an example from B². In order for t·r′ to
be equal to α, the value of t would be ⌈α/r′⌉ ≤ ⌈2α/r2⌉.

If we further increase t so that t·r′ = cα, where c > 1, we have the following conclusion: ∀xi, xj ∈
S s.t. ‖xi − b‖ > 2r and ‖xi − xj‖ ≤ cα, ni/n − nj/n ≤ (c + 2)(1−p)·εV(r), i.e. si/n ≤ (c + 2)(1−p)·εV(r).
As long as p ≫ (c + 2)(1−p)p²/2^d, i.e. c ≤ 2^d/(p(1−p)) − 2, then ∀xi ∈ S with ‖xi − b‖ > 2r, sk > si,
and we will pick examples from B². Since r′ ≤ r, the method will continue querying examples in
B² until the ⌊(2^d/(p(1−p)) − 2)·α/r⌋-th iteration.

Finally, we show that the probability of picking a minority class example from B² is at least 1/3.
To this end, we need to calculate the maximum probability mass of the majority class within B².
Consider the case where the maximum value of f1(x) occurs at b, and this pdf decreases by ε every
time x moves away from b in the direction of the radius by α, i.e. the shape of f1(x) is a cone
in (d + 1)-dimensional space. Since f1(x) must integrate to 1, i.e. V(α·f1(b)/ε)·f1(b)/(d + 1) = 1, where
V(α·f1(b)/ε) is the volume of a hyper-ball with radius α·f1(b)/ε,
we have f1(b) = ((d + 1)/V(α))^{1/(d+1)}·ε^{d/(d+1)}.
Therefore, the probability mass of the majority class within B² is at most:

V(2r)·(f1(b) − 2rε/α) + (2rε/(α(d + 1)))·V(2r) < V(2r)·f1(b)
  = 2^d·V(r)·((d + 1)/V(α))^{1/(d+1)}·ε^{d/(d+1)}
  < (d + 1)^{1/(d+1)}·(2^{d+1}·V(r)·ε)^{d/(d+1)}
  ≤ (d + 1)^{1/(d+1)}·(p²·OV(r2/2, r)/V(r))^{d/(d+1)}
  < 2p

where V(2r) is the volume of a hyper-ball with radius 2r. Therefore, if we select a point at random
from B², the probability that this point is from the minority class is at least p/(p + (1−p)·2p) ≥ p/(p + 2p) = 1/3. ∎
2.4 Rare category detection for multiple classes
In subsection 2.2, we have discussed rare category detection for the binary case. In this subsection,
we focus on the case where m > 2. To be specific, let p1, . . . , pm be the priors of the m classes,
with p1 ≫ pi, i ≠ 1. Our goal is to use as few label requests as possible to find at least one example
from each class.
The method proposed in subsection 2.2 can be easily generalized to multiple classes, as presented in Algorithm 2. In this algorithm, we are given the priors of all the minority classes. Using
each pi, we estimate the number Ki of examples from this class, and calculate the corresponding r′i
value in the same manner as NNDB. Then, we calculate the local density at each example based on
the different scales r′i. In the outer loop of Step 9, we calculate the r′ value, which is the minimum of
all the r′i whose corresponding classes have not been discovered yet, and its index. In the inner loop
of Step 11, we gradually enlarge the neighborhood to calculate the score of each example. This is
the same as NNDB, except that we preclude the examples that are within a certain distance of any
selected example from being selected. This heuristic is to avoid repeatedly selecting examples from
the same discovered class. The inner loop stops when we find an example from an undiscovered
class. Then we update the r′ value and resume the inner loop. If the minority classes form
compact clusters and are far apart from each other, NNDM is able to detect examples from each
minority class with a small number of label requests.
Algorithm 2 Nearest-Neighbor-Based Rare Category Detection for Multiple Classes (NNDM)
Require: S, p2, . . . , pm
1: for i = 2 : m do
2:    Let Ki = n·pi.
3:    For each example, calculate the distance between this example and its Ki-th nearest neighbor.
      Set r′i to be the minimum value among all the examples.
4: end for
5: Let r′1 = max_{i=2,...,m} r′i.
6: for i = 1 : m do
7:    ∀xj ∈ S, let NN(xj, r′i) = {x | x ∈ S, ‖x − xj‖ ≤ r′i}, and nij = |NN(xj, r′i)|.
8: end for
9: while not all the classes have been discovered do
10:   Let r′ = min{r′i | 1 ≤ i ≤ m, and class i has not been discovered}, and let s be the corresponding index, i.e. r′ = r′s.
11:   for t = 1 : n do
12:      For each xi that has been selected and labeled yi, set sx = −∞ for every x ∈ S with ‖x − xi‖ ≤ r′_{yi};
         for all the other examples, si = max_{xj ∈ NN(xi, t·r′)} (n_{si} − n_{sj}).
13:      Query x = arg max_{xi ∈ S} si.
14:      If x belongs to a class that has not been discovered, break.
15:   end for
16: end while
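A matching sketch of NNDM follows, under the same caveats as the NNDB sketch above: brute-force distances, a hypothetical `query_label` oracle, and our own illustrative naming.

```python
import numpy as np

def nndm(X, priors, query_label, m):
    """Minimal sketch of Algorithm 2 (NNDM).

    X           : (n, d) array of unlabeled examples.
    priors      : dict {class i: prior p_i} for the minority classes 2..m.
    query_label : oracle mapping an index into X to its class label in 1..m.
    m           : total number of classes; class 1 is the majority.
    """
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Per-class scales r'_i from the K_i-th nearest-neighbor distances.
    r = {}
    for i in range(2, m + 1):
        K = max(1, int(n * priors[i]))
        r[i] = np.sort(D, axis=1)[:, K].min()
    r[1] = max(r[i] for i in range(2, m + 1))
    # Local counts n_{ij} at every scale i.
    counts = {i: (D <= r[i]).sum(axis=1) for i in range(1, m + 1)}
    discovered, labeled = set(), {}          # labeled: index -> class
    while len(discovered) < m:
        # Smallest scale among classes not yet discovered.
        s = min((i for i in range(1, m + 1) if i not in discovered),
                key=lambda i: r[i])
        for t in range(1, n + 1):
            scores = np.full(n, -np.inf)
            for i in range(n):
                # Suppress points near any already-labeled example
                # (this also suppresses the labeled points themselves).
                if any(D[i, j] <= r[y] for j, y in labeled.items()):
                    continue
                nbrs = np.where(D[i] <= t * r[s])[0]
                scores[i] = (counts[s][i] - counts[s][nbrs]).max()
            i_star = int(np.argmax(scores))
            y = query_label(i_star)
            is_new = y not in discovered
            labeled[i_star] = y
            discovered.add(y)
            if is_new:
                break
    return labeled
```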
In NNDB and NNDM, we need the priors of the minority classes as the input. As we will see in the
next section, our algorithms are robust against small perturbations in the priors.
3 Experimental results
In this section, we compare our methods (NNDB and NNDM) with the best method proposed in [8]
(Interleave) and random sampling (RS) on both synthetic and real data sets. In Interleave, we use
the number of classes as the number of components in the mixture model. For both Interleave and
RS, we run the experiment multiple times and report the average results.
3.1 Synthetic data sets
Figure 1(a) shows a synthetic data set where the pdf of the majority class is Gaussian and the pdf of
the minority class is uniform within a small hyper-ball. There are 1000 examples from the majority
class and only 10 examples from the minority class. Using Interleave, we need to label 35 examples;
using RS, we need to label 101 examples; and using NNDB, we only need to label 3 examples in
order to sample one from the minority class, which are denoted as "x" in Figure 1(b). Notice that
the first 2 examples that NNDB selects are not from the correct region. This is because the number
of examples from the minority class is very small, and the local density may be affected by the
randomness in the data.
Figure 1: Synthetic Data Set 1. (a) Data Set; (b) Examples Selected by NNDB, denoted as "x".
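This first synthetic set is easy to reproduce. The sketch below draws it under the stated description (a standard 2-d Gaussian for the majority class and a uniform disc for the minority class); the disc's center and radius are our own guesses, since the text does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Majority class: 1000 draws from a standard 2-d Gaussian.
X_maj = rng.standard_normal((1000, 2))

# Minority class: 10 draws, uniform in a small disc (center/radius assumed).
center, radius = np.array([2.5, 3.5]), 0.25
theta = rng.uniform(0.0, 2.0 * np.pi, 10)
rad = radius * np.sqrt(rng.uniform(0.0, 1.0, 10))   # area-uniform in the disc
X_min = center + np.c_[rad * np.cos(theta), rad * np.sin(theta)]

X = np.vstack([X_maj, X_min])
y = np.r_[np.ones(1000, dtype=int), np.full(10, 2)]  # 1 = majority, 2 = minority
```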
In Figure 2(a), the X-shaped data consisting of 3000 examples correspond to the majority class, and
the four characters "NIPS" correspond to four minority classes, which consist of 138, 79, 118, and
206 examples respectively. Using Interleave, we need to label 1190 examples; using RS, we need
to label 83 examples; and using NNDM, we only need to label 5 examples in order to get one from
each of the minority classes, which are denoted as "x" in Figure 2(b). Notice that in this example,
Interleave is even worse than RS. This might be because some minority classes are located in a
region where the density of the majority class is not negligible, and thus may be "explained" by the
majority-class mixture-model component.
3.2 Real data sets
In this subsection, we compare different methods on two real data sets: Abalone [3] and Shuttle [4].
The first data set consists of 4177 examples, described by 7 dimensional features. The examples
come from 20 classes: the proportion of the largest class is 16.50%, and the proportion of the
smallest class is 0.34%. For the second data set, we sub-sample the original training set to produce
a smaller data set with 4515 examples, described by 9 dimensional features. The examples come
from 7 classes: the proportion of the largest class is 75.53%, and the proportion of the smallest class
is 0.13%.
The comparison results are shown in Figure 3(a) and Figure 3(b) respectively. From these figures,
we can see that NNDM is significantly better than Interleave and RS: with Abalone data set, to find
Figure 2: Synthetic Data Set 2. (a) Data Set; (b) Examples Selected by NNDM, denoted as "x".
all the classes, Interleave needs 280 label requests, RS needs 483 label requests, and NNDM only
needs 125 label requests; with Shuttle data set, to find all the classes, Interleave needs 140 label
requests, RS needs 512 label requests, and NNDM only needs 87 label requests. This is because
as the number of components becomes larger, the mixture model generated by Interleave is less
reliable due to the lack of labeled examples, thus we need to select more examples. Furthermore,
the majority and minority classes may not be near-separable, which is a disaster for Interleave. On
the other hand, NNDM does not assume a generative model for the data, and only focuses on the
change in local density, which is more effective on the two data sets.
Figure 3: Learning Curves for Real Data Sets. (a) Abalone; (b) Shuttle. Horizontal axis: Number of Selected Examples; vertical axis: Classes Discovered; curves: NNDM, Interleave, RS.
3.3 Imprecise priors
The proposed algorithms need the priors of the minority classes as input. In this subsection, we test
the robustness of NNDM against modest mis-estimations of the class priors. The performance of
NNDB is similar to NNDM, so we omit the results here. In the experiments, we use the same data
sets as in subsection 3.2, and add/subtract 5%, 10%, and 20% from the true priors of the minority
classes. The results are shown in Figure 4. From these figures, we can see that NNDM is very robust
to small perturbations in the priors. For example, with Abalone data set, if we subtract 10% from
the true priors, only one more label request is needed in order to find all the classes.
Figure 4: Robustness Study. (a) Abalone; (b) Shuttle. Horizontal axis: Number of Selected Examples; vertical axis: Classes Discovered; curves: priors perturbed by −20%, −10%, −5%, 0, +5%, +10%, +20%.
4 Conclusion
In this paper, we have proposed a novel method for rare category detection, useful for de-novo active
learning in serious applications. Different from existing methods, our method does not rely on the
assumption that the data is near-separable. It works by selecting examples corresponding to regions
with the maximum change in local density, and depending on scaling, it will select class-boundary
or class-internal samples of minority classes. The method could be scaled up using kd-trees [7]. The
effectiveness of the proposed method is guaranteed by theoretical justification, and its superiority
over existing methods is demonstrated by extensive experimental results on both synthetic and real
data sets. Moreover, it is very robust to modest perturbations in estimating true class priors.
Acknowledgments
This paper is based on work in part supported by the Defense Advanced Research Projects Agency
(DARPA) under contract number NBCHD030010.
References
[1] M. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In Proc. of the 23rd Int. Conf. on Machine Learning, pages 65-72, 2006.
[2] S. Bay, K. Kumaraswamy, M. Anderle, R. Kumar, and D. Steier. Large scale detection of irregularities in accounting data. In Proc. of the 6th Int. Conf. on Data Mining, pages 75-86, 2006.
[3] C. Blake and C. Merz. UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998.
[4] P. Brazdil and J. Gama. Statlog repository. http://www.niaad.liacc.up.pt/old/statlog/datasets/shuttle/shuttle.doc.html, 1991.
[5] S. Dasgupta. Coarse sample complexity bounds for active learning. In Advances in Neural Information Processing Systems 19, 2005.
[6] S. Fine and Y. Mansour. Active sampling for multiple output identification. In The 19th Annual Conf. on Learning Theory, pages 620-634, 2006.
[7] A. Moore. A tutorial on kd-trees. Technical report, University of Cambridge Computer Laboratory, 1991.
[8] D. Pelleg and A. Moore. Active learning for anomaly and rare-category detection. In Advances in Neural Information Processing Systems 18, 2004.
2,586 | 3,345 | The Value of Labeled and Unlabeled Examples when
the Model is Imperfect
Mikhail Belkin
Dept. of Computer Science and Engineering
Ohio State University
Columbus, OH 43210
mbelkin@cse.ohio-state.edu
Kaushik Sinha
Dept. of Computer Science and Engineering
Ohio State University
Columbus, OH 43210
sinhak@cse.ohio-state.edu
Abstract
Semi-supervised learning, i.e. learning from both labeled and unlabeled data has
received significant attention in the machine learning literature in recent years.
Still our understanding of the theoretical foundations of the usefulness of unlabeled data remains somewhat limited. The simplest and the best understood situation is when the data is described by an identifiable mixture model, and where
each class comes from a pure component. This natural setup and its implications
were analyzed in [11, 5]. One important result was that in certain regimes, labeled
data becomes exponentially more valuable than unlabeled data.
However, in most realistic situations, one would not expect that the data comes
from a parametric mixture distribution with identifiable components. There have
been recent efforts to analyze the non-parametric situation; for example, "cluster"
and "manifold" assumptions have been suggested as a basis for analysis. Still,
a satisfactory and fairly complete theoretical understanding of the nonparametric
problem, similar to that in [11, 5] has not yet been developed.
In this paper we investigate an intermediate situation, when the data comes from a
probability distribution, which can be modeled, but not perfectly, by an identifiable
mixture distribution. This seems applicable to many situations, when, for example,
a mixture of Gaussians is used to model the data. The contribution of this paper is
an analysis of the role of labeled and unlabeled data depending on the amount of
imperfection in the model.
1 Introduction
In recent years semi-supervised learning, i.e. learning from labeled and unlabeled data, has drawn
significant attention. The ubiquity and easy availability of unlabeled data together with the increased
computational power of modern computers, make the paradigm attractive in various applications,
while connections to natural learning make it also conceptually intriguing. See [15] for a survey on
semi-supervised learning.
From the theoretical point of view, semi-supervised learning is simple to describe. Suppose the data
is sampled from the joint distribution p(x, y), where x is a feature and y is the label. The unlabeled
data comes from the marginal distribution p(x). Thus the usefulness of unlabeled data is tied
to how much information about joint distribution can be extracted from the marginal distribution.
Therefore, in order to make unlabeled data useful, an assumption on the connection between these
distributions needs to be made.
In the non-parametric setting several such assumptions have recently been proposed, including
the cluster assumption and its refinement, the low-density separation assumption [7, 6], and the
manifold assumption [3]. These assumptions relate the shape of the marginal probability distribution
to class labels. The low-density separation assumption states that the class boundary passes through
the low density regions, while the manifold assumption proposes that the proximity of the points
should be measured along the data manifold. However, while these assumptions have motivated
several algorithms and have been shown to hold empirically, few theoretical results on the value of
unlabeled data in the non-parametric setting are available so far. We note the work of Balcan and
Blum ([2]), which attempts to unify several frameworks by introducing a notion of compatibility
between labeled and unlabeled data. In a slightly different setting some theoretical results are also
available for co-training ([4, 8]).
Far more complete results are available in the parametric setting. There one assumes that the distribution p(x, y) is a mixture of two parametric distributions p1 and p2, each corresponding to a
different class. Such a mixture is called identifiable if the parameters of each component can be uniquely
determined from the marginal distribution p(x). The study of usefulness of unlabeled data under this
assumption was undertaken by Castelli and Cover ([5]) and Ratsaby and Venkatesh ([11]). Among
several important conclusions from their study was the fact that, under a certain range of conditions,
labeled data is exponentially more important for approximating the Bayes optimal classifier than
unlabeled data. Roughly speaking, unlabeled data may be used to identify the parameters of each
mixture component, after which the class attribution can be established exponentially fast using only
few labeled examples.
While explicit mixture modeling is of great theoretical and practical importance, in many applications there is no reason to believe that the model provides a precise description of the phenomenon.
Often it is more reasonable to think that our models provide a rough approximation to the underlying
probability distribution, but do not necessarily represent it exactly. In this paper we investigate the
limits of usefulness of unlabeled data as a function of how far the best fitting model strays from the
underlying probability distribution.
The rest of the paper is structured as follows: we start with an overview of the results available for
identifiable mixture models together with some extensions of these results. We then describe how
the relative value of labeled and unlabeled data changes when the true distribution is a perturbation
of a parametric model. Finally we discuss various regimes of usability for labeled and unlabeled
data and represent our findings in Fig 1.
2 Relative Value of Labeled and Unlabeled Examples
Our analysis is conducted in the standard classification framework and studies the behavior of Perror − PBayes, where Perror is the probability of misclassification for a given classifier and PBayes is the
classification error of the optimal classifier. The quantity Perror − PBayes is often referred to as the
excess probability of error and expresses how far our classifier is from the best possible.
In what follows, we review some theoretical results that describe the behavior of the excess error probability as a function of the number of labeled and unlabeled examples. We will denote the number of
labeled examples by l and the number of unlabeled examples by u. We omit certain minor technical details to simplify the exposition. The classifier for which Perror is computed is based on the
underlying model.
Theorem 2.1. (Ratsaby and Venkatesh [11]) In a two class identifiable mixture model, let the
equiprobable class densities p1(x), p2(x) be d-dimensional Gaussians with unit covariance matrices. Then for sufficiently small ε > 0 and arbitrary δ > 0, given l = O(log(δ^{−1})) labeled and
u = O((d²/ε³)·(d·log(ε^{−1}) + log(δ^{−1}))) unlabeled examples respectively, with confidence at least 1 − δ,
the probability of error satisfies Perror ≤ PBayes·(1 + cε) for some positive constant c.
Since the mixture is identifiable, parameters can be estimated from unlabeled examples alone. Labeled examples are not required for this purpose. Therefore, unlabeled examples are used to estimate
the mixture and hence the two decision regions. Once the decision regions are established, labeled
examples are used to label them. An equivalent form of the above result in terms of labeled and
unlabeled examples is Perror − PBayes = O(d/u^{1/3}) + O(exp(−l)). For a fixed dimension d, this
indicates that labeled examples are exponentially more valuable than unlabeled examples in reducing the excess probability of error; however, when d is not fixed, higher dimensionality slows these
rates.
Independently, Cover and Castelli provided similar results in a different setting, under a Bayesian
framework.
Theorem 2.2. (Cover and Castelli [5]) In a two class mixture model, let p1(x), p2(x) be the parametric class densities and let h(η) be the prior over the unknown mixing parameter η. Then

Perror − PBayes = O( 1/u + exp{−Dl + o(l)} )

where D = −log{ 2·√(η(1−η)) · ∫ √(p1(x)·p2(x)) dx }.
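The exponent D is governed by the Bhattacharyya coefficient ∫√(p1 p2): the better separated the classes, the larger D and the faster labeled examples pay off. For two unit-covariance Gaussian class densities the coefficient has a closed form; the identity below is standard and stated only for reference, not derived in the paper.

```latex
% Bhattacharyya coefficient of N(\mu_1, I) and N(\mu_2, I):
\int \sqrt{p_1(x)\,p_2(x)}\,dx \;=\; \exp\!\left(-\frac{\lVert \mu_1-\mu_2\rVert^{2}}{8}\right),
% so for equiprobable classes (\eta = 1/2, hence 2\sqrt{\eta(1-\eta)} = 1):
\qquad D \;=\; \frac{\lVert \mu_1-\mu_2\rVert^{2}}{8}.
```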
In their framework, Cover and Castelli [5] assumed that the parameters of the individual class densities
are known, but the associated class labels and the mixing parameter are unknown. Under this
assumption their result shows that the above rate is obtained when l³·u^{−1} → 0 as l + u → ∞. In
particular this implies that, if u·e^{−Dl} → 0 and l = o(u), the excess error is essentially determined by
the number of unlabeled examples. On the other hand, if u grows faster than e^{Dl}, then the excess error
is determined by the number of labeled examples. For a detailed explanation of the above statements
see p. 2103 of [5]. The effect of dimensionality is not captured in their result.
Both results indicate that if the parametric model assumptions are satisfied, labeled examples are
exponentially more valuable than unlabeled examples in reducing the excess probability of error.
In this paper we investigate the situation when the parametric model assumptions are only satisfied
to a certain degree of precision, which seems to be a natural premise in a variety of practical settings.
It is interesting to note that uncertainty can appear for different reasons. One source of uncertainty
is a lack of examples, which we call Type-A. Imperfection of the model is another source of uncertainty, which we will refer to as Type-B.
• Type-A uncertainty (perfect model with imperfect information): Individual class
  densities follow the assumed parametric model. Uncertainty results from the finiteness of examples. The perturbation size specifies how well the parameters of the individual class densities
  can be estimated from finite data.
• Type-B uncertainty (imperfect model): Individual class densities do not follow the
  assumed parametric model. The perturbation size specifies how well the best fitting model can
  approximate the underlying density.
Before proceeding further, we describe our model and notation. We take the instance space X ⊆ R^d
with labels {−1, 1}. True class densities are always represented by p1(x) and p2(x) respectively. In
the case of Type-A uncertainty they are simply p1(x|θ1) and p2(x|θ2). In the case of Type-B uncertainty,
p1(x), p2(x) are perturbations of two d-dimensional densities from a parametric family F. We will
denote the mixing parameter by t and the individual parametric class densities by f1(x|θ1), f2(x|θ2)
respectively, and the resulting mixture density is t·f1(x|θ1) + (1 − t)·f2(x|θ2). We will show some
specific results when F consists of spherical Gaussian distributions with unit covariance matrix and
t = 1/2. In such a case θ1, θ2 ∈ R^d represent the means of the corresponding densities and the
mixture density is indexed by a 2d-dimensional vector θ = [θ1, θ2]. The class of such mixtures is
identifiable and hence, using unlabeled examples alone, θ can be estimated by θ̂ ∈ R^{2d}. By ‖·‖ we
denote the standard Euclidean norm in R^d and by ‖·‖_{d/2,2} the Sobolev norm. Note that for some
ε > 0, ‖·‖_{d/2,2} < ε implies ‖·‖₁ < ε and ‖·‖∞ < ε. We will frequently use the term

L(a, t, e) = log(a/δ) / ( (t − A·e)·(1 − 2·√((PBayes + B·e)·(1 − PBayes − B·e))) )

to represent the optimal number of labeled examples for correctly classifying estimated decision regions with high probability (as will become clear
in the next section), where t represents the mixing parameter, e represents the perturbation size, a is an
integer variable and A, B are constants.
2.1 Type-A Uncertainty: Perfect Model, Imperfect Information
Due to the finiteness of unlabeled examples, density parameters cannot be estimated arbitrarily close to
the true parameters in terms of the Euclidean norm. Clearly, how close they can be estimated depends
on the number of unlabeled examples used u, the dimension d and the confidence probability δ. Thus,
Type-A uncertainty inherently gives rise to a perturbation size ε1(u, d, δ), such that a fixed
u defines a perturbation size ε1(d, δ). Because of this perturbation, estimated decision regions differ
from the true decision regions. From [11] it is clear that only very few labeled examples are needed
to label these two estimated decision regions reasonably well with high probability. Let
such a number of labeled examples be l*. But what happens if the number of labeled examples
available is greater than l*? Since the individual densities follow the parametric model exactly,
these extra labeled examples can be used to estimate the density parameters and hence the decision
regions. However, using a simple union bound it can be shown ([10]) that the asymptotic rate of
convergence of such an estimation procedure is O(√((d/l)·log(d/δ))). Thus, provided we have u unlabeled
examples, if we want to represent the rate at which the excess probability of error reduces as a function
of the number of labeled examples, it is clear that initially the error reduces exponentially fast in the
number of labeled examples (following [11]) but then it reduces only at a rate O(√((d/l)·log(d/δ))).
Provided we use the following strategy, this extends the result of [11] as given in the Theorem
below.
We adopt the following strategy to utilize labeled and unlabeled examples in order to learn a classification rule.
Strategy 1:
1. Given u unlabeled examples and confidence probability δ > 0, use the maximum likelihood
   estimation method to learn the parameters of the mixture model such that the estimates
   θ̂1, θ̂2 are only ε1(u, d, δ) = Õ(d/u^{1/3}) close to the actual parameters with probability at
   least 1 − δ/4.
2. Use l* labeled examples to label the estimated decision regions with probability of incorrect
   labeling no greater than δ/4.
3. If l > l* examples are available, use them to estimate the individual density parameters with
   probability at least 1 − δ/2.
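As a concrete rendering of steps 1 and 2, the sketch below fits the two-component mixture to unlabeled data by maximum likelihood (scikit-learn's EM implementation stands in for the estimator analyzed above, and we let EM fit the spherical variances rather than pinning them to 1, a simplification) and then names the two estimated decision regions by majority vote over a handful of labeled points. Function and variable names are our own illustrative choices, and labels are assumed to be nonnegative integers.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def strategy_one(X_unlab, X_lab, y_lab):
    """Steps 1-2 of Strategy 1: fit the mixture on unlabeled data,
    then label its two decision regions with a few labeled points."""
    # Step 1: ML estimation of a two-component spherical mixture.
    gmm = GaussianMixture(n_components=2, covariance_type="spherical",
                          weights_init=[0.5, 0.5], random_state=0)
    gmm.fit(X_unlab)
    # Step 2: majority vote of the labeled points inside each region.
    comp = gmm.predict(X_lab)             # region index for each labeled point
    region_label = {}
    for k in range(2):
        votes = y_lab[comp == k]          # labels assumed nonnegative ints
        region_label[k] = int(np.bincount(votes).argmax()) if votes.size else k
    return lambda X: np.array([region_label[k] for k in gmm.predict(X)])
```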
Theorem 2.3. Let the model be a mixture of two equiprobable d-dimensional spherical Gaussians
p1(x|θ1), p2(x|θ2) having unit covariance matrices and means θ1, θ2 ∈ R^d. For any arbitrary
1 > δ > 0, if Strategy 1 is used with u unlabeled examples, then there exists a perturbation size
ε1(u, d, δ) > 0 and positive constants A, B such that, using l ≤ l* = L(24, 0.5, ε1) labeled examples, Perror − PBayes reduces exponentially fast in the number of labeled examples with probability
at least (1 − δ/2). If more labeled examples l > l* are provided, then with probability
at least (1 − δ/2), Perror − PBayes asymptotically converges to zero at a rate O(√((d/l)·log(d/δ))) as l → ∞. If we
represent the reduction rate of this excess error (Perror − PBayes) as a function of labeled examples
by Ree(l), then this can be compactly represented as

Ree(l) = O(exp(−l)) if l ≤ l*,   and   Ree(l) = O(√((d/l)·log(d/δ))) if l > l*.

After using l* labeled examples, Perror = PBayes + O(ε1).
2.2 Type-B Uncertainty: Imperfect Model
In this section we address the main question raised in this paper. Here the individual class densities
do not follow the assumed parametric model exactly but are perturbed versions of the assumed
model. The uncertainty in this case is specified by the perturbation size ε2, which roughly indicates
the extent to which the true class densities differ from those of the best fitting parametric model.
For any mixing parameter t ∈ (0, 1), let us consider a two class mixture model with individual class
densities p1(x), p2(x) respectively. Suppose the best knowledge available about this mixture model
is that the individual class densities approximately follow some parametric form from a class F. We
assume that the best approximations of p1, p2 within F are f1(x|θ1), f2(x|θ2) respectively, such that
for i ∈ {1, 2}, (fi − pi) are in the Sobolev class H^{d/2} and there exists a perturbation size ε2 > 0 such
that ‖p1 − f1‖_{d/2,2} ≤ ε2 and ‖p2 − f2‖_{d/2,2} ≤ ε2. Here, the Sobolev norm is used as a smoothness
condition and implies that the true densities are smooth and not "too different" from the best fitting
parametric model densities; in particular, if ‖fi − pi‖_{d/2,2} ≤ ε2 then ‖fi − pi‖₁ ≤ ε2 and
‖fi − pi‖∞ ≤ ε2.
We first show that, due to the presence of this perturbation, even complete knowledge of the
best fitting model parameters does not help in learning the optimal classification rule, in the following
sense. In the absence of any perturbation, complete knowledge of the model parameters implies that
the decision boundary, and hence the two decision regions, are explicitly known, but not their labels.
Thus, using only a very small number of labeled examples, Perror reduces exponentially fast in the
number of labeled examples to PBayes as the number of labeled examples increases. However, due to
the presence of the perturbation, Perror reduces exponentially fast in the number of labeled examples
only up to PBayes + O(ε2). Since beyond this point the parametric model assumptions do not hold,
some non-parametric technique must be used to estimate the actual
decision boundary. For any such nonparametric technique, Perror now reduces at a much slower rate.
This trend is roughly what the following theorem says. Here f1, f2 are general parametric densities,
not necessarily Gaussians. In what follows we assume that p1, p2 ∈ C^∞ and hence the convergence rate
for non-parametric classification (see [14]) is O(1/√l). A slower rate results if the infinite differentiability
condition is not satisfied.
Theorem 2.4. In a two class mixture model with individual class densities p1(x), p2(x) and mixing
parameter t ∈ (0, 1), let the mixture density of the best fitting parametric model be t·f1(x|θ1) + (1 −
t)·f2(x|θ2), where f1, f2 belong to some parametric class F and the true densities p1, p2 are perturbed
versions of f1, f2. For a perturbation size ε2 > 0, if ‖f1 − p1‖_{d/2,2} ≤ ε2, ‖f2 − p2‖_{d/2,2} ≤ ε2
and θ1, θ2 are known, then for any 0 < δ < 1, there exist positive constants A, B such that for
l ≤ l* = L(6, t, ε2) labeled examples, Perror − PBayes reduces exponentially fast in the number of
labeled examples with probability at least (1 − δ). If more labeled examples l > l* are provided,
Perror − PBayes asymptotically converges to zero at a rate O(1/√l) as l → ∞.
After using l* labeled examples, Perror = PBayes + O(ε2). Thus, from the above theorem, as labeled examples are added, initially the excess error reduces at a very fast rate
(exponentially in the number of labeled examples) until Perror − PBayes = O(ε2). After that
the excess error reduces only polynomially fast in the number of labeled examples. In proving
the above theorem we used a first order Taylor series approximation to get a crude upper bound for
the decision boundary movement. However, for a specific class of parametric densities such a
crude approximation may not be necessary. In particular, as we show next, if the best fitting model
is a mixture of spherical Gaussians, where the boundary is a linear hyperplane, an explicit upper bound
on the boundary movement can be found. In the following, we assume the class F to be a class of d-dimensional spherical Gaussians with identity covariance matrix. However, the true model is an
equiprobable mixture of perturbed versions of these individual class densities. As before, given
u unlabeled examples and l labeled examples, we want a strategy to learn a classification rule and
analyze the effect of these examples, and also of the perturbation size ε2, in reducing the excess probability
of error.
One option to achieve this task is to use the unlabeled examples to estimate the true mixture density
(1/2)·p1 + (1/2)·p2; however, the number of unlabeled examples required to estimate the mixture density using non-parametric kernel density estimation is exponential in the number of dimensions [10]. Thus, for
high dimensional data this is not an attractive option, and such an estimate also does not provide any
clue as to where the decision boundary is. A better option is to use the unlabeled examples
to estimate the best fitting Gaussians within F. The number of unlabeled examples needed to estimate
such a mixture of Gaussians is only polynomial in the number of dimensions [10], and it is easy
to show that the distance between the Bayesian decision function and the decision function due to
the Gaussian approximation is at most ε2 in the ‖·‖_{d/2,2} norm sense.
Now suppose we use the following strategy to use labeled and unlabeled examples.
Strategy 2:
1. Assume the examples are distributed according to a mixture of equiprobable Gaussians
   with unit covariance matrices and apply the maximum likelihood estimation method to find the
   best Gaussian approximation of the densities.
2. Use a small number of labeled examples l* to label the two approximate decision regions
   correctly with high probability.
3. If more (l > l*) labeled examples are available, use them to learn a better decision function
   using some nonparametric technique.
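Steps 1 and 2 are identical to Strategy 1 above; only step 3 differs, in that surplus labels feed a nonparametric learner instead of refining the Gaussian parameters. A minimal sketch of that step follows, with k-nearest-neighbors standing in for the unspecified nonparametric technique (our choice, not the paper's).

```python
from sklearn.neighbors import KNeighborsClassifier

def strategy_two_step3(X_lab, y_lab, n_neighbors=5):
    """Step 3 of Strategy 2: once l > l*, the parametric fit has hit its
    O(eps_2) floor, so the remaining labels train a distribution-free rule."""
    return KNeighborsClassifier(n_neighbors=n_neighbors).fit(X_lab, y_lab)
```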
Theorem 2.5. In a two class mixture model with equiprobable class densities p1(x), p2(x), let the
mixture density of the best fitting parametric model be (1/2)·f1(x|θ1) + (1/2)·f2(x|θ2), where f1, f2 are
d-dimensional spherical Gaussians with means θ1, θ2 ∈ R^d and p1, p2 are perturbed versions of
f1, f2, such that for a perturbation size ε2 > 0, ‖f1 − p1‖_{d/2,2} ≤ ε2, ‖f2 − p2‖_{d/2,2} ≤ ε2. For
any ε > 0 and 0 < δ < 1, there exist positive constants A, B such that if Strategy 2 is used
with u = O((d²/ε³)·(d·log(1/ε) + log(1/δ))) unlabeled and l* = L(12, 0.5, (ε + ε2)) labeled examples,
then for l ≤ l*, Perror − PBayes reduces exponentially fast in the number of labeled examples
with probability at least (1 − δ). If more labeled examples l > l* are provided, Perror − PBayes
asymptotically converges to zero at most at a rate O(1/√l) as l → ∞. If we represent the reduction
rate of this excess error (Perror − PBayes) as a function of labeled examples as Ree(l), then this
can be compactly represented as

Ree(l) = O(exp(−l)) if l ≤ l*,   and   Ree(l) = O(1/√l) if l > l*.
After using l* labeled examples, Perror = PBayes + O(ε + ε2). Note that when the number of unlabeled examples is infinite, the parameters of the best fitting model can be estimated arbitrarily well, i.e.,
ε → 0, and Perror − PBayes reduces exponentially fast in the number of labeled examples
until Perror − PBayes = O(ε2). On the other hand, if ε = O(ε2), Perror − PBayes still reduces
exponentially fast in the number of labeled examples until Perror − PBayes = O(ε2). This implies
that an O(ε2)-close estimate of the parameters of the best fitting model is "good" enough. A more precise
estimate of the parameters of the best fitting model using more unlabeled examples does not help reduce Perror − PBayes at the same exponential rate beyond Perror − PBayes = O(ε2). The following
Corollary states this important fact.
Corollary 2.6. For a perturbation size ε2 > 0, let the best fitting model for a mixture of equiprobable
densities be a mixture of equiprobable d-dimensional spherical Gaussians with unit covariance
matrices. If using u* = O((d²/ε2³)·(d·log(1/ε2) + log(1/δ))) unlabeled examples the parameters of the best
fitting model can be estimated O(ε2)-close in the Euclidean norm sense, then any additional unlabeled
examples u > u* do not help in reducing the excess error.
3 Discussion on different rates of convergence
In this section we discuss the effect of the perturbation size ε2 on the behavior of Perror − PBayes and
its effect on controlling the value of labeled and unlabeled examples. Different combinations of
numbers of labeled and unlabeled examples give rise to four different regions where Perror − PBayes
behaves differently, as shown in Figure 1, where the x axis corresponds to the number of unlabeled
examples and the y axis corresponds to the number of labeled examples.
Let u* be the number of unlabeled examples required to estimate the parameters of the best fitting
model O(ε2)-close in the Euclidean norm sense. Using the Õ notation to hide the log factors, according to
Theorem 2.5, u* = Õ(d³/ε2³). When u > u*, unlabeled examples have no role to play in reducing
Perror − PBayes, as shown in region II and part of III in Figure 1. For u ≤ u*, unlabeled examples
become useful only in regions I and IV. When u* unlabeled examples are available to estimate the
parameters of the best fitting model O(ε2)-close, let the number of labeled examples required to
label the estimated decision regions so that Perror − PBayes = O(ε2) be l*. The figure is just a
graphical representation of the different regions where Perror − PBayes reduces at different rates.
Figure 1: The Big Picture. Behavior of Perror − PBayes for different numbers of labeled (l, vertical axis) and unlabeled (u, horizontal axis) examples. Region I: O(exp(−l)) + O(d/u^{1/3}). Region II: O(exp(−l)). Region III: Õ(√(d/l)) + Õ(d/u^{1/3}). Region IV (non-parametric methods): O(1/√l). The axes are marked at u* = Õ(d³/ε2³) on the u axis, and at l* and l1* on the l axis.
3.1 Behavior of Perror − PBayes in Region I
In this region, u ≤ u* unlabeled examples estimate the decision regions, and lu* labeled examples,
a number which depends on u, are required to correctly label these estimated regions. Perror − PBayes reduces
at a rate O(exp(−l)) + O(d/u^{1/3}) for u < u* and l < lu*. This rate can be interpreted as the rate
at which unlabeled examples estimate the parameters of the best fitting model and the rate at which
labeled examples correctly label these estimated decision regions. However, for small u the estimate
of the decision regions will be poor, and the corresponding lu* > l*. Instead of using this large
number of labeled examples to label poorly estimated decision regions, they can instead be used to
estimate the parameters of the best fitting model and, as will be seen next, this is precisely what
happens in region III. Thus in region I, l is restricted to l < lu* and Perror − PBayes reduces at a rate
O(exp(−l)) + O(d/u^{1/3}).
3.2 Behavior of Perror − PBayes in Region II
In this region l ≤ l* and u > u*. As shown in Corollary 2.6, using u* unlabeled examples the
parameters of the best fitting model can be estimated O(ε2)-close in the Euclidean norm sense, and a more
precise estimate of the best fitting model parameters using more unlabeled examples u > u* does
not help reduce Perror − PBayes. Thus, unlabeled examples have no role to play in this region,
and for a small number of labeled examples l ≤ l*, Perror − PBayes reduces at a rate O(exp(−l)).
3.3 Behavior of Perror − PBayes in Region III
In this region u ≤ u*, and hence the model parameters have not been estimated O(ε2)-close to the
parameters of the best fitting model. Thus, in some sense the model assumptions are still valid and
there is scope for better estimation of the parameters. The number of labeled examples available in
this region is greater than what is required merely to label the decision regions estimated using
u unlabeled examples, and hence these excess labeled examples can be used to estimate the model
parameters. Note that once the parameters have been estimated O(ε2)-close to the parameters of the
best fitting model using labeled examples, the parametric model assumptions are no longer valid. If l1*
is the number of such labeled examples, then in this region l* < l ≤ l1*. Also note that, depending
on the number of unlabeled examples u ≤ u*, l* and l1* are not fixed numbers but will depend on
u. In the presence of labeled examples alone, using Theorem 2.3, Perror − PBayes reduces at a rate
Õ(√(d/l)). Since parameters are being estimated using both labeled and unlabeled examples, the
effective rate at which Perror − PBayes reduces in this region can be thought of as the mean of the
two.
3.4 Behavior of Perror − PBayes in Region IV
In this region, when u > u* we have l > l*, and when u ≤ u* we have l > l1*. In either case, since the parameters of
the best fitting model have already been estimated O(ε2)-close to the parameters of the best fitting model,
the parametric model assumptions are no longer valid and excess labeled examples must be used in a
nonparametric way. For nonparametric classification, either of the two basic families
of classifiers, plug-in classifiers or empirical risk minimization (ERM) classifiers, can be used [13,
9]. A nice discussion of the rates and fast rates of convergence of both these types of classifiers
can be found in [1, 12]. The general convergence rate, i.e. the rate at which the expected value of
(Perror − PBayes) reduces, is of the order O(l^{−κ}) as l → ∞, where κ > 0 is some exponent,
typically κ ≤ 0.5. Also it was shown in [14] that under general conditions this bound cannot be
improved in a minimax sense. In particular it was shown that if the true densities belong to the C^∞ class
then this rate is O(1/√l). However, if the infinite differentiability condition is not satisfied then this rate
is much slower.
Acknowledgements This work was supported by NSF Grant No 0643916.
References
[1] J. Y. Audibert and A. Tsybakov. Fast convergence rate for plug-in estimators under margin conditions. Unpublished manuscript, 2005.
[2] M-F. Balcan and A. Blum. A PAC-style model for learning from labeled and unlabeled data. In 18th Annual Conference on Learning Theory, 2005.
[3] M. Belkin and P. Niyogi. Semi-supervised learning on Riemannian manifolds. Machine Learning, 56 (Invited, Special Issue on Clustering):209-239, 2004.
[4] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In 11th Annual Conference on Learning Theory, 1998.
[5] V. Castelli and T. M. Cover. The relative values of labeled and unlabeled samples in pattern recognition with an unknown mixing parameter. IEEE Trans. Information Theory, 42(6):2102-2117, 1996.
[6] O. Chapelle, J. Weston, and B. Scholkopf. Cluster kernels for semi-supervised learning. NIPS, 15, 2002.
[7] O. Chapelle and A. Zien. Semi-supervised classification by low density separation. In 10th International Workshop on Artificial Intelligence and Statistics, 2005.
[8] S. Dasgupta, M. L. Littman, and D. McAllester. PAC generalization bounds for co-training. NIPS, 14, 2001.
[9] L. Devroye, L. Gyorfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, New York, Berlin, Heidelberg, 1996.
[10] J. Ratsaby. The complexity of learning from a mixture of labeled and unlabeled examples. PhD thesis, 1994.
[11] J. Ratsaby and S. S. Venkatesh. Learning from a mixture of labeled and unlabeled examples with parametric side information. In 8th Annual Conference on Learning Theory, 1995.
[12] A. B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Ann. Statist., 32(1):135-166, 2004.
[13] V. N. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
[14] Y. Yang. Minimax nonparametric classification - Part I: Rates of convergence; Part II: Model selection for adaptation. IEEE Trans. Inf. Theory, 45:2271-2292, 1999.
[15] X. Zhu. Semi-supervised learning literature survey. Technical Report 1530, Department of Computer Science, University of Wisconsin Madison, December 2006.
2,587 | 3,346 | Robust Regression with Twinned Gaussian Processes
Andrew Naish-Guzman & Sean Holden
Computer Laboratory
University of Cambridge
Cambridge, CB3 0FD. United Kingdom
{agpn2,sbh11}@cl.cam.ac.uk
Abstract
We propose a Gaussian process (GP) framework for robust inference in which a
GP prior on the mixing weights of a two-component noise model augments the
standard process over latent function values. This approach is a generalization of
the mixture likelihood used in traditional robust GP regression, and a specialization of the GP mixture models suggested by Tresp [1] and Rasmussen and Ghahramani [2]. The value of this restriction is in its tractable expectation propagation
updates, which allow for faster inference and model selection, and better convergence than the standard mixture. An additional benefit over the latter method lies
in our ability to incorporate knowledge of the noise domain to influence predictions, and to recover with the predictive distribution information about the outlier
distribution via the gating process. The model has asymptotic complexity equal
to that of conventional robust methods, but yields more confident predictions on
benchmark problems than classical heavy-tailed models and exhibits improved
stability for data with clustered corruptions, for which they fail altogether. We
show further how our approach can be used without adjustment for more smoothly
heteroscedastic data, and suggest how it could be extended to more general noise
models. We also address similarities with the work of Goldberg et al. [3].
1 Introduction
Regression data are often modelled as noisy observations of an underlying process. The simplest
assumption is that all noise is independent and identically distributed (i.i.d.) zero-mean Gaussian,
such that a typical set of samples appears as a cloud around the latent function. The Bayesian framework of Gaussian processes [4] is well-suited to these conditions, for which all computations remain
tractable (see figure 1a). Furthermore, the Gaussian noise model enjoys the theoretical justification
of the central limit theorem, which states that the sum of sufficiently many i.i.d. random variables of
finite variance will be distributed normally. However, only rarely can perturbations affecting data in
the real world be argued to have originated in the addition of many i.i.d. sources. The random component in the signal may be caused by human or measurement error, or it may be the manifestation
of systematic variation invisible to a simplified model. In any case, if ever there is the possibility of
encountering small quantities of highly implausible data, we require robustness, i.e. a model whose
predictions are not greatly affected by outliers.
Such demands render the standard GP inappropriate: the light tails of the Gaussian distribution
cannot explain large non-Gaussian deviations, which either skew the mean interpolant away from
the majority of the data, or force us to infer an unreasonably large (global) noise variance (see
figure 1b). Robust methods use a heavy-tailed likelihood to allow the interpolant effectively to
favour smoothness and ignore such erroneous data. Figure 1c shows how this can be achieved using
a two-component noise model
p(y_n | f_n) = (1 − ε) N(y_n ; f_n, σ_R²) + ε N(y_n ; f_n, σ_O²),   (1)
[Figure 1: panels (a), (b), (c), (d).]
Figure 1: Black dots show noisy samples from the sinc function. In panels (a) and (b), the behaviour of a GP with a Gaussian noise assumption is illustrated; the shaded region shows 95%
confidence intervals. The presence of a single outlier is highly influential in this model, but the
heavy-tailed likelihood (1) in panel (c) is more resilient. Unfortunately, even this model fails for
the cluster of outliers in panel (d). Here, grey lines show ten repeated runs of the EP inference
algorithm, while the black line and shaded region are their averaged mean and confidence intervals
respectively, grossly at odds with those of the latent generative model.
in which observations y_n are Gaussian corruptions of f_n, being drawn with probability ε from a
large-variance outlier distribution (σ_O² ≫ σ_R²). Inference in this model is tractable, but impractical
for all but the smallest problems due to the exponential explosion of terms in products of (1).
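To make this concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper) of the per-observation likelihood (1); the particular values of ε, σ_R² and σ_O² are assumptions for the example.

import numpy as np

def mixture_lik(y, f, eps=0.05, var_r=0.01, var_o=1.0):
    """Two-component noise likelihood of eq. (1), evaluated pointwise."""
    def norm_pdf(x, mean, var):
        return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return (1 - eps) * norm_pdf(y, f, var_r) + eps * norm_pdf(y, f, var_o)

# The joint likelihood of N observations is a product of such two-term sums;
# expanding it yields 2**N Gaussian terms, the exponential blow-up noted above.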
In this paper, we address the more fundamental GP assumption of i.i.d. noise. Our research is motivated by observing how the predictive distribution suffers for heavy-tailed models when outliers
appear in bursts: figure 1d replicates figure 1c, but introduces an additional three outliers. All parameters were taken from the optimal solution to (c), but even without the challenge of hyperparameter
optimization there is now considerable uncertainty in the posterior since the competing interpretations of the cluster as signal or noise have similar posterior mass. Viewed another way, the tails of
the effective log likelihood of four clustered observations have approximately one-quarter the weight
of a single outlier, so the magnitude of the posterior peak associated with the robust solution is comparably reduced. One simple remedy is to make the tails of the likelihood heavier. However, since
the noise model is global, this has ramifications across the entire data space, potentially causing
underfitting elsewhere when real data are relegated to the tails. We can establish an optimal choice
for the parameters by gradient ascent on the marginal likelihood, but it is entirely possible that no
single setting will be universally satisfactory.
The model introduced in this paper, which we call the twinned Gaussian process (TGP), generalizes
the noise model (1) by using a GP gating function to choose between the "real" and "outlier" distributions: in regions of confidence, the tails can be made very light, encouraging the interpolant
to hug the data points tightly; more dubious observations can be treated appropriately by broadening the noise distribution in their vicinity. Our model is also a specialization of the GP mixtures
proposed by Tresp [1] and Rasmussen and Ghahramani [2]; indeed, the latter automatically infers
the correct number of components to use. One may therefore wonder what can possibly be gained
by restricting ourselves to a comparatively simple architecture. The answer is in the computational
overhead required for the different approaches, since these more general models require inference
by Monte Carlo methods. We argue that the two-component mixture is often a sensible distribution
for modelling real data, with a natural interpretation and the heavy tails required for robustness;
its weaknesses are exposed primarily when the noise distribution is not homoscedastic. The TGP
largely solves this problem, and allows inference by an efficient expectation propagation (EP) [5]
procedure (rather than resorting to more heavy-duty Monte Carlo methods). Hence, provided a two-component mixture is likely to reflect adequately the noise on our data, the TGP will give similar
results to the generalized mixtures mentioned above, but at a fraction of the cost.
Goldberg et al. [3] suggest an approach to input-dependent noise in the spirit of the TGP, in which
the log variance on observations is itself modelled as a GP (the logarithm since noise variance is
a non-negative property). Inference is again analytically intractable, so Gibbs sampling is used
to generate noise vectors from the posterior distribution by alternately fitting the signal process
and fitting the noise process. A further stage of Gibbs sampling is required at each test point to
estimate the predictive variance, making testing rather slow. Model selection is even slower, and the
Metropolis-Hastings algorithm is suggested for updating hyperparameters.
2 Twinned Gaussian processes
Given a domain 𝒳 and covariance function K(·, ·) : 𝒳 × 𝒳 → ℝ, a Gaussian process (GP) over
the space of real-valued functions of 𝒳 specifies the joint distribution at any finite set X ⊂ 𝒳:
p(f | X) = N(f ; 0, K_f),
where f = {f_n}_{n=1}^N are (latent) values associated with each x_n ∈ X, and K_f is the Gram
matrix, the evaluation of the covariance function at all pairs (x_i, x_j). We apply Bayes' rule to obtain
the posterior distribution over the f, given the observed X and y, which with the assumption of
i.i.d. Gaussian corrupted observations is also normally distributed. Predictions at X_* are made by
marginalizing over f in the (Gaussian) joint p(f, f_* | X, y, X_*). See [6] for a thorough introduction.
Robust GP regression is achieved by using a leptokurtic likelihood distribution, i.e. one whose tails
have more mass than the Gaussian. Common choices are the Laplace (or double exponential) distribution, Student's t distribution, and the mixture model (1). In product with the prior, a heavy-tailed
likelihood over an outlying observation does not exert the strong pull on the posterior witnessed
with a light-tailed noise model. Kuss [7] describes how inference can be performed for all these
likelihoods, and establishes that in many cases their performance is broadly comparable. Since it
bears closest resemblance to the twinned GP, we are particularly interested in the mixture; however,
in section 4, we include results for the Laplace model: it is the heaviest-tailed log concave distribution, which guarantees a unimodal posterior and allows more reliable EP convergence. In any
case, all such methods make a global assumption about the noise distribution, and it is where this is
inappropriate that our model is most beneficial.
The graphical model for the TGP is shown in figure 2b. We augment the standard process over f
with another GP over a set of variables u; this acts as a gating function, probabilistically dividing
the domain between the real and outlier components of the noise model
p(y_n | f_n) = Φ(u_n) N(y_n ; f_n, σ_R²) + Φ(−u_n) N(y_n ; f_n, σ_O²),   (2)

where Φ(u_n) = ∫_{−∞}^{u_n} N(z ; 0, 1) dz.
In the TGP likelihood, we therefore mix two forms of Gaussian corruption, one strongly peaked at
the observation, the other a broader distribution which provides the heavy tails, in proportion determined by u(x). This makes intuitive sense; crucially to us, it retains the advantage of tractability
with respect to EP updates. The two priors may have quite different covariance structure, reflecting our different beliefs about correlations in the signal and in the noise domain. In addition, we
accommodate prior beliefs about the prevalence of outliers with a non-zero mean process on u,
p(u | X) = N(u ; m_u, K_u),   p(f | X) = N(f ; 0, K_f).
Our model can be understood as lying between two extremes: observe that we recover the heavy-tailed (mixture of Gaussians) GP by forcing absolute correlation in u and adjusting the mean of
the u-process to m_u = Φ⁻¹(1 − ε); conversely, if we remove all correlations in u, we return to a
standard mixture model where independently we must decide to which component an input belongs.
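For illustration, here is a minimal sketch of the gated likelihood (2), using SciPy's standard normal CDF for Φ; the function and parameter names are our own assumptions.

from scipy.stats import norm

def tgp_lik(y, f, u, var_r=0.01, var_o=1.0):
    """Twinned-GP likelihood of eq. (2): the GP-valued gate u mixes the components."""
    gate = norm.cdf(u)  # Phi(u): weight on the narrow 'real' component
    return (gate * norm.pdf(y, loc=f, scale=var_r ** 0.5)
            + (1 - gate) * norm.pdf(y, loc=f, scale=var_o ** 0.5))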
3 Inference
We begin with a very brief account of EP; for more details, see [5, 8]. Suppose we have an intractable
distribution over f whose unnormalized form factorizes into a product of terms, such as a dense
Gaussian prior t_0(f, u) and a series of independent likelihoods {t_n(y_n | f_n, u_n)}_{n=1}^N. EP constructs
the approximate posterior as a product of scaled site functions t̃_n. For computational tractability,
these sites are usually chosen from an exponential family with natural parameters θ, since in this
case their product retains the same functional form as its components. The Gaussian (μ, Σ) has a
natural parameterization (b, Π) = (Σ⁻¹μ, −½Σ⁻¹). If the prior is of this form, its site function is
exact:
p(f, u | y) = (1/Z) t_0(f, u) ∏_{n=1}^N t_n(y_n | f_n, u_n) ≈ q(f ; θ) = t_0(f, u) ∏_{n=1}^N z_n t̃_n(f_n, u_n ; θ_n),   (3)
Figure 2: In panel (a) we show a graphical model for the Gaussian process. The data ordinates are x,
observations y, and the GP is over the latent f . The bold black lines indicate a fully-connected set.
Panel (b) shows a graphical model for the twinned Gaussian process (TGP), in which an auxiliary
set of hidden variables u describes the noisiness of the data.
where Z is the marginal likelihood and z_n are the scale parameters. Ideally, we would choose θ
at the global minimum of some divergence measure d(p‖q), but the necessary optimization is usually intractable. EP is an iterative procedure that finds a minimizer of KL( p(f, u | y) ‖ q(f, u ; θ) )
on a pointwise basis: at each iteration, we select a new site n, and from the product of the cavity distribution formed by the current marginal with the omission of that site, and the true likelihood term t_n, we obtain the so-called tilted distribution q^{\n}(f_n, u_n ; θ^{\n}). A simpler optimization
min_{θ_n} KL( q^{\n}(f_n, u_n ; θ^{\n}) ‖ q(f_n, u_n ; θ) ) then fits only the parameters θ_n: this is equivalent to moment matching between the two distributions, with scale z_n chosen to match the zeroth-order moments. After each site update, the moments at the remaining sites are liable to change, and several
iterations may be required before convergence.
The priors over u and f are independent, but we expect correlations in the posterior after conditioning on observations. To understand this, consider a single observation (x_n, y_n); in principle, it
admits two explanations corresponding to its classification as either "outlier" or "real" data: in
general terms, either u_n > 0 and f_n ≈ y_n, or u_n < 0 and f_n respects the global structure of the
signal. A diagram to assist the visualization of the behaviour of the posterior is provided in figure 3.
Now, recall that the prior over u and f is

p( [u ; f] | X ) = N( [u ; f] ; [m_u ; 0], [[K_u, 0], [0, K_f]] ),

and the likelihood factorizes into a product of terms (2); our site approximations t̃_n are therefore
Gaussian in (f_n, u_n). Of importance for EP are the moments of the tilted distribution, which we
seek to match. These are most easily obtained by differentiation of the zeroth moments Z_R and Z_O
of each component. We find
Z_R = ∫∫_{f,u} Φ(u) N(y ; f, σ_R²) N( [u ; f] ; μ, Σ ) du df = ∫_0^∞ N( [z ; y] ; μ, Σ + [[1, 0], [0, σ_R²]] ) dz;

writing the inner Gaussian as N( [z_n ; y_n] ; [μ_u ; μ_f], [[A, C], [C, B_R]] ),   Z_R = N(y ; μ_f, B_R) Φ(q),

where q = ( μ_u + (C/B_R)(y − μ_f) ) / √( A − C²/B_R ).
The integral for the outlier component is similar; Z_O = N(y ; μ_f, B_O) Φ(−q). With partial derivatives ∂ log Z/∂μ and ∂² log Z/∂μ∂μᵀ we are equipped for EP; algorithmic details appear in Seeger's note [8]. For
efficiency, we make rank-two updates of the full approximate covariance on (f , u) during the EP
loop, and refresh the posterior at the end of each cycle to avoid loss of precision.
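As a sketch of the moment computation just described (all variable names are ours; the cavity mean (μ_u, μ_f) and covariance blocks are assumed to be supplied by the EP loop):

from math import sqrt
from scipy.stats import norm

def z_r(y, mu_u, mu_f, sig_uu, sig_uf, sig_ff, var_r):
    """Zeroth moment of the 'real' component: Z_R = N(y; mu_f, B_R) Phi(q)."""
    A = sig_uu + 1.0      # variance of z = u - w with w ~ N(0, 1)
    B_R = sig_ff + var_r  # marginal variance of y under the cavity
    C = sig_uf            # cavity covariance between u and f
    q = (mu_u + (C / B_R) * (y - mu_f)) / sqrt(A - C ** 2 / B_R)
    return norm.pdf(y, loc=mu_f, scale=sqrt(B_R)) * norm.cdf(q)

# Z_O is analogous, with B_O = sig_ff + var_o and the gate Phi(-q).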
[Figure 3: a grid of plots. Left column: log-density cross-sections over f (legend: prior, likelihood, posterior, EP); centre and right columns: contours of the true and EP-approximated log joint over (f, u).]
Figure 3: Using the twinned Gaussian process provides a natural resilience against clustered noisy
data. The left-hand column illustrates the behaviour of a fixed heavy-tailed likelihood for one,
two, four and five repeated observations at f = 5. (Outliers in real data are not necessarily so
tightly packed, but the symmetry of this approximation allows us to treat them as a single unit: by
"posterior", for example, we mean the a posteriori belief in all the observations' (identical) latent
f.) The context is provided by the prior, which gives 95% confidence to data around f = 0 ± 2. The
top-left box illustrates how the influence of isolated outliers is mitigated by the standard mixture.
However, a repeated observation (box two on the left) causes the EP solution to collapse onto the
spike at the data (the log scale is deceptive: the second peak contributes only about 8% of the
posterior mass). The twinned GP better preserves the marginal distribution of f by maintaining a
joint distribution over both f and u: in the second and third columns respectively are contours of
the true log joint (we use a broad zero-mean prior on u) and that inferred by EP, together with the
marginal posterior over f . Only with a fifth observation?final box?is the context of f essentially
overruled by the TGP approximation. The thick bar in the central column marks the cross-section
corresponding to the unnormalized posterior from column one.
3.1 Predictions
If the outlier component describes nuisance noise that should be eliminated, we require at test inputs x_* only the marginal distribution p(f_* | x_*, X, y), obtained by marginalizing over u in the full
(approximate) posterior N( [u ; f] ; [μ̂_u ; μ̂_f], [[Σ̂_uu, Σ̂_uf], [Σ̂_fu, Σ̂_ff]] ):

p(f_* | x_*, X, y) = ∫ p(f_* | x_*, f) p(f | X, y) df
                  ≈ N( f_* ; k_{f*}ᵀ K_f⁻¹ μ̂_f ,  k_{**} − k_{f*}ᵀ K_f⁻¹ k_{f*} + k_{f*}ᵀ K_f⁻¹ Σ̂_ff K_f⁻¹ k_{f*} ).
The noise process may itself be of interest, in which case we need to marginalize over both u_* and
f_* in

p(y_* | x_*, X, y) = ∫∫ p( y_* | x_*, [u_* ; f_*] ) p( [u_* ; f_*] | X, y ) du_* df_*
                  ≈ ∫∫∫∫ p( y_* | x_*, [u_* ; f_*] ) p( [u_* ; f_*] | [u ; f] ) N( [u ; f] ; μ̂, Σ̂ ) du_* df_* du df.
This distribution is no longer Gaussian, but its moments may be recovered easily by the same method
used to obtain moments of the tilted distribution.
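The f-marginal prediction above can be sketched as follows (names are ours; K_f, the test covariances and the approximate posterior moments are assumed to be given):

import numpy as np

def predict_f(k_star, k_star_star, K_f, mu_f_hat, Sigma_ff_hat):
    """Predictive mean and variance of f* from the approximate posterior."""
    alpha = np.linalg.solve(K_f, k_star)  # K_f^{-1} k_f*
    mean = alpha @ mu_f_hat
    var = k_star_star - alpha @ k_star + alpha @ Sigma_ff_hat @ alpha
    return mean, var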
EP provides in addition to the approximate moments of the posterior distribution an estimate of the
marginal likelihood and its derivatives with respect to kernel hyperparameters. Again, we refer the
interested reader to the algorithm presented in [8], adding here only that our implementation uses
log noise values on (σ_R², σ_O²) to allow for their unconstrained optimization.
3.2 Complexity
The EP loop is dominated by the rank-two updates of the covariance. Each such update is
O((2N)²), making every N iterations O(4N³). The posterior refresh is O(8N³) since it requires the inverse of a 2N × 2N positive semi-definite matrix, most efficiently achieved through
Cholesky factorization (this Cholesky factor can be retained for use in calculating the approximate
log marginal likelihood). The total number of loops required for convergence of EP is typically independent of N, and can be upper bounded by a small constant, say 10, making the entire inference
process O(8N³) = O(N³). Thus, our algorithm has the same limiting time complexity as i.i.d. robust regression by EP, which admittedly masks the larger coefficient that appears in approximating
both u and f simultaneously. Additionally, the body of the EP loop is slightly slower, since the precision matrix in a standard GP can be obtained with a single division, whereas our model requires
the inversion of a 2 × 2 matrix.
4 Experiments
We identify two general noise characteristics for which our model may be suitable. The first is
when the outlying observations can appear in clusters: we saw in figure 1d how these occurrences
affect the standard mixture model. In fact the problem is quite severe, since the multimodality of the
posterior impedes the convergence of EP, while the possibility of conflicting gradient information at
the optima hampers procedures for evidence maximization. In figure 4 we illustrate how the TGP
succeeds where the mixture and Laplace models fail; note how the mean process on u falls sharply
in the contaminated regions. This is a stable solution, and hyperparameters can be fit reliably.
A data set which exhibits the superior predictive modelling of the TGP in a domain where robust
methods can also expect to perform well is provided by Kuss [7] in a variation on a set of Friedman
[9]. The samples are drawn from a function of ten-dimensional vectors x which depend only on the
first five components:
f(x) = 10 sin(πx₁x₂) + 20(x₃ − 0.5)² + 10x₄ + 5x₅.
[Figure 4: three panels over x ∈ [−10, 10]: (a) Mixture noise, (b) Laplace noise, (c) TGP.]
Figure 4: The corruptions are i.i.d. around x = −10, and highly correlated near x = 0.
We generated ten sets of 90 training examples and 10000 test examples by sampling x uniformly
in [0, 1]^10, and adding to the training data noise N(0, 1). In our first experiment, we replicated the
procedure of [7]: ten training points were added at random with outputs sampled from N (15, 9) (a
value likely to lie in the same range as f ). The results appear as Friedman (1) in figure 5. Observe
that the r.m.s. error for the robust methods is similar, but the TGP is able to fit the variance far more
accurately. In a second experiment, the training set was augmented with two Gaussian clusters each
of five noisy observations. The cluster centres were drawn uniformly in [0, 1]^10, with variance fixed
at 10⁻³. Output values were then drawn from N(0, 1) for all ten points, to give highly correlated
values distant from the underlying function (Friedman (2)). Now the TGP excels where the other
methods offer no improvement on the standard GP; it also yields very confident predictions (cf.
Friedman (1)), because once the outliers have been accounted for there are fewer corrupted regions;
furthermore, estimates of where the data are corrupted can be recovered by considering the process
on u. In both experiments, the training data were renormalized to zero mean and unit variance, and
throughout, we used the anisotropic squared exponential for the f process (implementing so-called
relevance determination), and an isotropic version for u. The approximate marginal likelihood was
maximized on three to five randomly initialized models; we chose for testing the most favoured.
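A sketch of the data-generating procedure as described (the random seed and variable names are our own assumptions):

import numpy as np

rng = np.random.default_rng(0)

def friedman(x):
    return (10 * np.sin(np.pi * x[:, 0] * x[:, 1]) + 20 * (x[:, 2] - 0.5) ** 2
            + 10 * x[:, 3] + 5 * x[:, 4])

X = rng.uniform(size=(90, 10))         # inputs in [0, 1]^10; only five dims matter
y = friedman(X) + rng.normal(size=90)  # unit-variance training noise
# Friedman (2): a tight cluster of five correlated outliers with N(0, 1) outputs.
centre = rng.uniform(size=(1, 10))
X_out = centre + rng.normal(scale=np.sqrt(1e-3), size=(5, 10))
y_out = rng.normal(size=5)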
The second domain of application is when the noise on the data is believed a priori to be a function of
the input (i.e. heteroscedastic). The twinned GP can simulate this changing variance by modulating
the u process, allocating varying weight to the two components. By way of example, the behaviour
for the one-dimensional motorcycle set [10] is shown in fig. 5c. However, since the input-dependent
noise is not modelled directly, there are two notable dangers associated with this approach: first,
the predictive variance saturates when all weight has been apportioned to one or other component;
second, the "outlier" component can dominate the variance estimates of the mixture. This is particularly problematic when variance on the data ranges over several orders of magnitude, such that the
"outlier" width must be comparably broader than that of the "real" component. In such cases, only
with extreme values of u can the smallest errors be predicted, but in consequence the process tends
to sweep precipitately through the region of sensitivity where variance predictions can be made accurately. To circumvent these problems we might employ the warped GP [11] to rescale the process
on u in a supervised manner, but we do not explore these ideas further here.
[Figure 5: bar plots of test error and neg. log probability for GP, Lap, Mix and TGP on (a) Friedman (1) and (b) Friedman (2); panel (c) shows the TGP predictions on the Motorcycle set.]
Figure 5: Results for the Friedman data, and the predictions of the TGP on the motorcycle set.
5 Extensions
With prior knowledge of the nature of corruptions affecting the signal, we can seek to model the
noise distribution more accurately, for example by introducing a compound likelihood for the outlier
component p_O(y_n | f_n) = Σ_j π_j N(y_n ; μ_j(f_n), σ_j²), with Σ_j π_j = 1. This constrains the relative
weight of outlier corruptions to be constant across the entire domain. A richer alternative is provided
by extending the single u-process on noise to a series u^(1), u^(2), …, u^(ℓ) of noise processes, and
broadening the likelihood function appropriately. For example, with ℓ = 2, we may write

p(y_n | f_n, u_n^(1), u_n^(2)) = Φ(u_n^(1)) N(y_n ; f_n, σ_R²)
  + Φ(−u_n^(1)) Φ(u_n^(2)) N(y_n ; f_n, σ_O1²)
  + Φ(−u_n^(1)) Φ(−u_n^(2)) N(y_n ; f_n, σ_O2²).   (4)
In the former case, the preceding analysis applies with small changes: each component of the outlier
distribution contributes moments independently. The second model introduces significant computational difficulty: firstly, we must maintain a posterior distribution over f and all ℓ u-processes, yielding
space requirements O(N(ℓ + 1)) and time complexity O(N³(ℓ + 1)³). More importantly, the requisite moments needed in the EP loop are now intractable, although an inner EP loop can be used
to approximate them, since the product of Φs behaves in essence like the standard model for GP
classification. We omit details, and defer experiments with such a model to future work.
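For concreteness, a sketch of the ℓ = 2 likelihood (4) (function and parameter names are ours):

from scipy.stats import norm

def two_gate_lik(y, f, u1, u2, var_r, var_o1, var_o2):
    """Stick-breaking likelihood of eq. (4) with two gating processes."""
    g1, g2 = norm.cdf(u1), norm.cdf(u2)
    return (g1 * norm.pdf(y, f, var_r ** 0.5)
            + (1 - g1) * g2 * norm.pdf(y, f, var_o1 ** 0.5)
            + (1 - g1) * (1 - g2) * norm.pdf(y, f, var_o2 ** 0.5))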
6 Conclusions
We have presented a method for robust GP regression that improves upon classical approaches by
allowing the noise variance to vary in the input space. We found improved convergence on problems
which upset the standard mixture model, and have shown how predictive certainty can be improved
by adopting the TGP even for problems which do not. The model also allows an arbitrary process
on u, such that specialized prior knowledge could be used to drive the inference over f to respecting
regions which may otherwise be considered erroneous. A generalization of our ideas appears as
the mixture of GPs [1], and the infinite mixture [2], but both involve a slow inference procedure.
When faster solutions are required for robust inference, and a two-component mixture is an adequate
model for the task, we believe the TGP is a very attractive option.
References
[1] Volker Tresp. Mixtures of Gaussian processes. In Advances in Neural Information Processing Systems,
pages 654–660, 2000.
[2] Carl Edward Rasmussen and Zoubin Ghahramani. Infinite mixtures of Gaussian process experts. In
Advances in Neural Information Processing Systems, 2002.
[3] Paul Goldberg, Christopher Williams, and Christopher Bishop. Regression with input-dependent noise:
a Gaussian process treatment. In Advances in Neural Information Processing Systems. MIT Press, 1998.
[4] Edward Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Advances
in Neural Information Processing Systems 18. MIT Press, 2005.
[5] Thomas Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, Massachusetts
Institute of Technology, 2001.
[6] Carl Rasmussen and Christopher Williams. Gaussian processes for machine learning. MIT Press, 2006.
[7] Malte Kuss. Gaussian process models for robust regression, classification and reinforcement learning.
PhD thesis, Technische Universität Darmstadt, 2006.
[8] Matthias Seeger. Expectation propagation for exponential families, 2005. Available from
http://www.cs.berkeley.edu/~mseeger/papers/epexpfam.ps.gz.
[9] J. H. Friedman. Multivariate adaptive regression splines. Annals of Statistics, 19(1):1–67, 1991.
[10] B.W. Silverman. Some aspects of the spline smoothing approach to non-parametric regression curve
fitting. Journal of the Royal Statistical Society B, 47:1–52, 1985.
[11] Edward Snelson, Carl Edward Rasmussen, and Zoubin Ghahramani. Warped Gaussian processes. In
Advances in Neural Information Processing Systems 16, 2003.
2,588 | 3,347 | Computing Robust Counter-Strategies
Michael Johanson
johanson@cs.ualberta.ca
Martin Zinkevich
maz@cs.ualberta.ca
Michael Bowling
Computing Science Department
University of Alberta
Edmonton, AB Canada T6G2E8
bowling@cs.ualberta.ca
Abstract
Adaptation to other initially unknown agents often requires computing an effective counter-strategy. In the Bayesian paradigm, one must find a good counter-strategy to the inferred posterior of the other agents' behavior. In the experts
paradigm, one may want to choose experts that are good counter-strategies to
the other agents' expected behavior. In this paper we introduce a technique for
computing robust counter-strategies for adaptation in multiagent scenarios under
a variety of paradigms. The strategies can take advantage of a suspected tendency
in the decisions of the other agents, while bounding the worst-case performance
when the tendency is not observed. The technique involves solving a modified
game, and therefore can make use of recently developed algorithms for solving
very large extensive games. We demonstrate the effectiveness of the technique in
two-player Texas Hold'em. We show that the computed poker strategies are substantially more robust than best response counter-strategies, while still exploiting
a suspected tendency. We also compose the generated strategies in an experts algorithm showing a dramatic improvement in performance over using simple best
responses.
1 Introduction
Many applications for autonomous decision making (e.g., assistive technologies, electronic commerce, interactive entertainment) involve other agents interacting in the same environment. The
agents' choices are often not independent, and good performance may necessitate adapting to the
behavior of the other agents. A number of paradigms have been proposed for adaptive decision
making in multiagent scenarios. The agent modeling paradigm proposes to learn a predictive model
of other agents' behavior from observations of their decisions. The model is then used to compute
or select a counter-strategy that will perform well given the model. An alternative paradigm is the
mixture of experts. In this approach, a set of expert strategies is identified a priori. These experts
can be thought of as counter-strategies for the range of expected tendencies in the other agents'
behavior. The decision maker, then, chooses amongst the counter-strategies based on their online
performance, commonly using techniques for regret minimization (e.g., UCB1 [ACBF02]). In either
approach, finding counter-strategies is an important subcomponent.
The most common approach to choosing a counter-strategy is best response: the performance maximizing strategy if the other agents' behavior is known [Rob51, CM96]. In large domains where best
response computations are not tractable, they are often approximated with "good responses" from a
computationally tractable set, where performance maximization remains the only criterion [RV02].
The problem with this approach is that best response strategies can be very brittle. While maximizing performance against the model, they can (and often do) perform poorly when the model
is wrong. The use of best response counter-strategies, therefore, puts an impossible burden on a
priori choices, either the agent model bias or the set of expert counter-strategies. McCracken and
Bowling [MB04] proposed ε-safe strategies to address this issue. Their technique chooses the best
performance maximizing strategy from the set of strategies that don't lose more than ε in the worst case. The strategy balances exploiting the agent model with a safety guarantee in case the model is
wrong. Although conceptually appealing, it is computationally infeasible even for moderately sized
domains and has only been employed in the simple game of Ro-Sham-Bo.
In this paper, we introduce a new technique for computing robust counter-strategies. The counter-strategies, called restricted Nash responses, balance performance maximization against the model
with reasonable performance even when the model is wrong. The technique involves computing a
Nash equilibrium of a modified game, and therefore can exploit recent advances in solving large
extensive games [GHPS07, ZBB07, ZJBP08]. We demonstrate the practicality of the approach in
the challenging domain of poker. We begin by reviewing the concepts of extensive form games,
best responses, and Nash equilibria, as well as describing how these concepts apply in the poker
domain. We then describe a technique for computing an approximate best response to an arbitrary
poker strategy, and show that this, indeed, produces brittle counter-strategies. We then introduce
restricted Nash responses, describe how they can be computed efficiently, and show that they are
significantly more robust while still being effective counter-strategies. Finally, we demonstrate that
these strategies can be used in an experts algorithm to make a more effective adaptive player than
when using simple best response.
2 Background
A perfect information extensive game consists of a tree of game states. At each game state, an
action is made either by nature, or by one of the players, or the state is a terminal state where each
player receives a fixed utility. A strategy for a player consists of a distribution over actions for every
game state. In an imperfect information extensive game, the states where a player makes an action
are divided into information sets. When a player chooses an action, it does not know the state of
the game, only the information set, and therefore its strategy is a mapping from information sets
to distributions over actions. A common restriction on imperfect information extensive games is
perfect recall, where two states can only be in the same information set for a player if that player
took the same actions from the same information sets to reach the two game states. In the remainder
of the paper, we will be considering imperfect information extensive games with perfect recall.
Let σ_i be a strategy for player i, where σ_i(I, a) is the probability that the strategy assigns to action a in
information set I. Let Σ_i be the set of strategies for player i, and define u_i(σ_1, σ_2) to be the expected
utility of player i if player 1 uses σ_1 ∈ Σ_1 and player 2 uses σ_2 ∈ Σ_2. Define BR(σ_2) ⊆ Σ_1 to be
the set of best responses to σ_2, i.e.:

BR(σ_2) = argmax_{σ_1 ∈ Σ_1} u_1(σ_1, σ_2)   (1)

and define BR(σ_1) ⊆ Σ_2 similarly. If σ_1 ∈ BR(σ_2) and σ_2 ∈ BR(σ_1), then (σ_1, σ_2) is a Nash
equilibrium. A zero-sum extensive game is an extensive game where u_1 = −u_2. In this type of
game, for any two equilibria (σ_1, σ_2) and (σ_1′, σ_2′), u_1(σ_1, σ_2) = u_1(σ_1′, σ_2′), and (σ_1, σ_2′) (as well as
(σ_1′, σ_2)) are also equilibria. Define the value of the game to player 1 (v_1) to be the expected utility
of player 1 in equilibrium. In a zero-sum extensive game, the exploitability of a strategy σ_1 ∈ Σ_1 is:

ex(σ_1) = max_{σ_2 ∈ Σ_2} (v_1 − u_1(σ_1, σ_2)).   (2)
The value of the game to player 2 (v_2) and the exploitability of a strategy σ_2 ∈ Σ_2 are defined
similarly. A strategy which can be exploited for no more than ε is ε-safe. An ε-Nash equilibrium
in a zero-sum extensive game is a strategy pair where both strategies are ε-safe.
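For intuition, exploitability (2) is easy to state in a normal-form analogue. The sketch below is our own illustration (the paper itself works with extensive games); it uses Ro-Sham-Bo, whose game value is v_1 = 0.

import numpy as np

# Rows: player 1's pure strategies (R, P, S); columns: player 2's. Entries are u1.
U = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])

def exploitability(sigma1, U, v1=0.0):
    """ex(sigma1) = v1 - min over player 2's responses of u1(sigma1, sigma2);
    a best-responding opponent drives u1 down to the minimum over pure responses."""
    return v1 - (sigma1 @ U).min()

print(exploitability(np.array([1/3, 1/3, 1/3]), U))  # 0.0: equilibrium play is safe
print(exploitability(np.array([1.0, 0.0, 0.0]), U))  # 1.0: always-Rock is exploitable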
In the remainder of the work, we will be dealing with mixing two strategies. Informally, one can
think of mixing two strategies as performing the following operation: first, flip a (possibly biased)
coin; if it comes up heads, use the first strategy, otherwise use the second strategy. Formally, define
π^{σ_i}(I) to be the probability that player i, when following strategy σ_i, chooses the actions necessary to
make information set I reachable from the root of the game tree. Given σ_1, σ_1′ ∈ Σ_1 and p ∈ [0, 1],
define mix_p(σ_1, σ_1′) ∈ Σ_1 such that for any information set I of player 1, for all actions a:

mix_p(σ_1, σ_1′)(I, a) = [ p π^{σ_1}(I) σ_1(I, a) + (1 − p) π^{σ_1′}(I) σ_1′(I, a) ] / [ p π^{σ_1}(I) + (1 − p) π^{σ_1′}(I) ].   (3)
Given an event E, define Pr_{σ_1,σ_2}[E] to be the probability of the event E given player 1 uses σ_1,
and player 2 uses σ_2. Given the above definition of mix, it is the case that for all σ_1, σ_1′ ∈ Σ_1, all
σ_2 ∈ Σ_2, all p ∈ [0, 1], and all events E:

Pr_{mix_p(σ_1,σ_1′), σ_2}[E] = p Pr_{σ_1,σ_2}[E] + (1 − p) Pr_{σ_1′,σ_2}[E]   (4)

So probabilities of outcomes can simply be combined linearly. As a result, the utility of a mixture of
strategies is just u(mix_p(σ_1, σ_1′), σ_2) = p u(σ_1, σ_2) + (1 − p) u(σ_1′, σ_2).
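A sketch of the mixing rule (3), assuming a toy representation in which a strategy is stored together with its reach probabilities π^σ(I); the data structures and names are our own assumptions.

def mix(p, pi1, sigma1, pi2, sigma2, I, a):
    """Eq. (3): action probability of mix_p(sigma_1, sigma_1') at (I, a).

    pi1[I] and pi2[I] hold the reach probabilities pi^sigma(I); sigma1[I][a]
    and sigma2[I][a] hold the action probabilities.
    """
    w1 = p * pi1[I]
    w2 = (1 - p) * pi2[I]
    return (w1 * sigma1[I][a] + w2 * sigma2[I][a]) / (w1 + w2)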
3 Texas Hold'Em
While the techniques in this paper apply to general extensive games, our empirical results will focus
on the domain of poker. In particular, we look at heads-up limit Texas Hold'em, the game used in
the AAAI Computer Poker Competition [ZL06]. A single hand of this poker variant consists of two
players each being dealt two private cards, followed by five community cards being revealed. Each
player tries to form the best five-card poker hand from the community cards and her private cards:
if the hand goes to a showdown, the player with the best five-card hand wins the pot. The key to
good play is on average to have more chips in the pot when you win than are in the pot when you
lose. The players' actions control the pot size through betting. After the private cards are dealt, a
round of betting occurs, followed by additional betting rounds after the third (flop), fourth (turn),
and fifth (river) community cards are revealed. Betting rounds involve players alternately deciding
to either fold (letting the other player win the chips in the pot), call (matching the opponent's chips
in the pot), or raise (matching, and then adding an additional fixed amount into the pot). No more
than four raises are allowed in a single betting round. Notice that heads-up limit Texas Hold'em is
an example of a finite imperfect information extensive game with perfect recall. When evaluating
the results of a match (several hands of poker) between two players, we find it convenient to state
the result in millibets won per hand. A millibet is one thousandth of a small-bet, the fixed magnitude
of bets used in the first two rounds of betting. To provide some intuition for these numbers, a player
that always folds will lose 750 mb/h while a typical player that is 10 mb/h stronger than another
would require over one million hands to be 95% certain to have won overall.
Abstraction. While being a relatively small variant of poker, the game tree for heads-up limit
Texas Hold'em is still very large, having approximately 9.17 × 10^17 states. Fundamental operations,
such as computing a best response strategy or a Nash equilibrium as described in Section 2, are
intractable on the full game. Common practice is to define a more reasonably sized abstraction by
merging information sets (e.g., by treating certain hands as indistinguishable). If the abstraction
involves the same betting structure, a strategy for an abstract game can be played directly in the full
game. If the abstraction is small enough Nash equilibria and best response computations become
feasible. Finding an approximate Nash equilibrium in an abstract game has proven to be an effective
way to construct a strong program for the full game [BBD+ 03, GS06]. Recent solution techniques
have been able to compute approximate Nash equilibria for abstractions with as many as 10^10 game
states [ZBB07, GHPS07]. Given a strategy defined in a small enough abstraction, it is also possible
to compute a best response to the strategy in the abstract game. This can be done in time linear in the
size of the extensive game. The abstraction used in this paper has approximately 6.45 × 10^9 game
states, and is described in an accompanying technical report [JZB07].
The Competitors. Since this work focuses on adapting to other agents' behavior, our experiments
make use of a battery of different poker playing programs. We give a brief description of these
programs here. PsOpti4 [BBD+ 03] is one of the earliest successful near equilibrium programs
for poker and is available as "Sparbot" in the commercial title Poker Academy. PsOpti6 is a later
and weaker variant, but whose weaknesses are thought to be less obvious to human players. Together, PsOpti4 and PsOpti6 formed Hyperborean, the winner of the AAAI 2006 Computer Poker
Competition. S1239, S1399, and S2298 are similar near equilibrium strategies generated by a new
equilibrium computation method [ZBB07] using a much larger abstraction than is used in PsOpti4
and PsOpti6. A60 and A80 are two past failed attempts at generating interesting exploitive strategies,
and are highly exploitable for over 1000 mb/h. CFR5 is a new near Nash equilibrium [ZJBP08],
and uses the abstraction described in the accompanying technical report [JZB07]. We will also experiment with two programs Bluffbot and Monash, who placed second and third respectively in the
AAAI 2006 Computer Poker Competition's bankroll event [ZL06].
4 Frequentist Best Response
In the introduction, we described best response counter-strategies as brittle, performing poorly when
playing against a different strategy from the one which they were computed to exploit. In this section, we examine this claim empirically in the domain of poker. Since a best response computation
is intractable in the full game, we first describe a technique, called frequentist best response, for
finding a "good response" using an abstract game. As described in the previous section, given a
strategy in an abstract game we can compute a best response to that strategy within the abstraction.
The challenge is that the abstraction used by an arbitrary opponent is not known. In addition, it may
be beneficial to find a best response in an alternative, possibly more powerful, abstraction.
Suppose we want to find a "good response" to some strategy P. The basic idea of frequentist best
response (FBR) is to observe P playing the full game of poker, construct a model of it in an abstract
game (unrelated to P's own abstraction), and then compute a best response in this abstraction.
FBR first needs many examples of the strategy playing the full, unabstracted game. It then iterates
through every one of P's actions for every hand. It finds the action's associated information set in
the abstract game and increments a counter associated with that information set and action. After
observing a sufficient number of hands, we can construct a strategy in the abstract game based on
the frequency counts. At each information set, we set the strategy's probability for performing each
action to be the number of observations of that action being chosen from that information set, divided
by the total number of observations in the information set. If an information set was never observed,
the strategy defaults to the call action. Since this strategy is defined in a known abstraction, FBR
can simply calculate a best response to this frequentist strategy.
P's opponent in the observed games greatly affects the quality of the model. We have found it
most effective to have P play against a trivial strategy that calls and raises with equal probability.
This provides us with the most observations of P's decisions that are well distributed throughout
the possible betting sequences. Observing P in self-play or against near equilibrium strategies has
shown to require considerably more observed hands. We typically use 5 million hands of training
data to compute the model strategy, although reasonable responses can still be computed with as few
as 1 million hands.
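The counting stage can be sketched as follows (a simplified illustration; the infoset encoding, the action set and the call default follow the description above, but all names are our own assumptions):

from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))

def observe(hand):
    """Tally P's decisions, keyed by the information set of *our* abstract game.

    `hand` is a sequence of (abstract_infoset, action) pairs for P's actions.
    """
    for infoset, action in hand:
        counts[infoset][action] += 1

def model_strategy(infoset, actions=("fold", "call", "raise")):
    """Frequentist model: observed frequencies, defaulting to 'call' if unseen."""
    seen = counts[infoset]
    total = sum(seen.values())
    if total == 0:
        return {a: float(a == "call") for a in actions}
    return {a: seen[a] / total for a in actions}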
Evaluation. We computed frequentist best response strategies against seven different opponents.
We played each resulting response both against the opponent it was designed to exploit and against the
other six opponents and an approximate equilibrium strategy computed using the same abstraction.
The results of this tournament are shown as a crosstable in Table 1. Positive numbers (appearing
with a green background) are in favor of the row player (FBR strategies, in this case).
The first thing to notice is that FBR is very successful at exploiting the opponent it was designed to
exploit, i.e., the diagonal of the crosstable is positive and often large. In some cases, FBR identified
strategies exploiting the opponent for more than previously known to be possible, e.g., PsOpti4 had
only previously been exploited for 75 mb/h [Sch06], while FBR exploits it for 137 mb/h. The second
thing to notice is that when FBR strategies play against other opponents their performance is poor,
i.e., the off-diagonal of the crosstable is generally negative and occasionally by a large amount. For
example, A60 is not a strong program. It is exploitable for over 2000 mb/h (note that always fold
only loses 750 mb/h) and an approximate equilibrium strategy defeats it by 93 mb/h. Yet, every FBR
strategy besides the one trained on it loses to it, sometimes by a substantial amount. These results
give evidence that best response is, in practice, a brittle computation, and can perform poorly when
the model is wrong.
One exception to this trend is play within the family of S-bots. In particular, consider S1399 and
S1239, which are very similar programs, using the same technique for equilibrium computation with
the same abstract game. They only differ in the number of iterations the algorithm was afforded. The
                                        Opponents
              PsOpti4  PsOpti6    A60    A80  S1239  S1399  S2298   CFR5  Average
FBR-PsOpti4       137     -163   -227   -231   -106    -85   -144   -210     -129
FBR-PsOpti6       -79      330    -68    -89    -36    -23    -48    -97      -14
FBR-A60          -442     -499   2170   -701   -359   -305   -377   -620     -142
FBR-A80          -312     -281   -557   1048   -251   -231   -266   -331     -148
FBR-S1239         -20      105    -89    -42    106     91    -32    -87        3
FBR-S1399         -43       38    -48    -77     75    118    -46   -109      -11
FBR-S2298         -39       51    -50    -26     42     50     33    -41        2
CFR5               36      123     93     41     70     68     17      0       56
Max               137      330   2170   1048    106    118     33      0
Table 1: Results of frequentist best responses (FBR) against a variety of opponent programs in full
Texas Hold'em, with winnings in mb/h for the row player. Results involving PsOpti4 or PsOpti6
used 10 duplicate matches of 10,000 hands and are significant to 20 mb/h. Other results used 10
duplicate matches of 500,000 hands and are significant to 2 mb/h.
results show they do share weaknesses, as FBR-S1399 beats S1239 by 75 mb/h. However, this
is 30% less than 106 mb/h, the amount that FBR-S1239 beats the same opponent. Considering the
similarity of these opponents, even this apparent exception is actually suggestive that best response
is not robust to even slight changes in the model.
Finally, consider the performance of the approximate equilibrium player, CFR5. As it was computed
from a relatively large abstraction it performs comparably well, not losing to any of the seven opponents. However, it also does not win by the margins of the correct FBR strategy. As noted, against
the highly exploitable A60, it wins by a mere 93 mb/h. What we really want is a compromise.
We would like a strategy that can exploit an opponent successfully like FBR, but without the large
penalty when playing against a different opponent. The remainder of the paper examines Restricted
Nash Response, a technique for creating such strategies.
5 Restricted Nash Response
Imagine that you had a model of your opponent, but did not believe that this model was perfect.
The model may capture the general idea of the adversary you expect to face, but most likely is not
identical. For example, maybe you have played a previous version of the same program, have a
model of its play, but suspect that the designer is likely to have made some small improvements in
the new version. One way to explicitly define our situation is that with the new version we might
expect that 75 percent of the hands will be played identically to the old version. The other 25 percent
is some new modification, for which we want to be robust. This, in itself, can be thought of as a
game for which we can apply the usual game theoretic machinery of equilibria.
Let our model of our opponent be some strategy σ_fix ∈ Σ_2. Define Σ_2^{p,σ_fix} to be those strategies of
the form mix_p(σ_fix, σ_2′), where σ_2′ is an arbitrary strategy in Σ_2. Define the set of restricted best
responses to σ_1 ∈ Σ_1 to be:

BR^{p,σ_fix}(σ_1) = argmax_{σ_2 ∈ Σ_2^{p,σ_fix}} u_2(σ_1, σ_2)   (5)
A (p, σ_fix) restricted Nash equilibrium is a pair of strategies (σ_1*, σ_2*) where σ_2* ∈ BR^{p,σ_fix}(σ_1*)
and σ_1* ∈ BR(σ_2*). In this pair, the strategy σ_1* is a p-restricted Nash response (RNR) to σ_fix. We
propose these RNRs would be ideal counter-strategies for σ_fix, where p provides a balance between
exploitation and exploitability. This concept is closely related to ε-safe best responses [MB04].
Define Σ_1^{ε-safe} ⊆ Σ_1 to be the set of all strategies which are ε-safe (with an exploitability less than
ε). Then the set of ε-safe best responses is:

BR^{ε-safe}(σ_2) = argmax_{σ_1 ∈ Σ_1^{ε-safe}} u_1(σ_1, σ_2)   (6)
Theorem 1 For all σ_2 ∈ Σ_2, for all p ∈ (0, 1], if σ_1 is a p-RNR to σ_2, then there exists an ε such
that σ_1 is an ε-safe best response to σ_2.
[Figure 1: two scatter plots of exploitation (mb/h) versus exploitability (mb/h), with points labelled by p from (0.00) to (1.00); (a) versus PsOpti4, (b) versus A80.]
Figure 1: The tradeoff between ε and utility. For each opponent, we varied p ∈ [0, 1] for the RNR.
The labels at each datapoint indicate the value of p used.
The proof of Theorem 1 is in an accompanying technical report [JZB07]. The significance of Theorem 1 is that, among all strategies that are at most $\epsilon$ suboptimal, the RNR strategies are among the best responses. Thus, if we want a strategy that is at most $\epsilon$ suboptimal, we can vary $p$ to produce a strategy that is the best response among all such $\epsilon$-safe strategies.
Unlike $\epsilon$-safe best responses, a RNR can be computed by just solving a modification of the original abstract game. For example, if using a sequence form representation of linear programming then one just needs to add lower bound constraints for the restricted player's realization plan probabilities. In our experiments we use a recently developed solution technique based on regret minimization [ZJBP08] with a modified game that starts with an unobserved chance node deciding whether the restricted player is forced to use strategy $\sigma_{fix}$ on the current hand. The RNRs used in our experiments were computed with less than a day of computation on a 2.4 GHz AMD Opteron.
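To make the restricted game concrete, the following is a minimal sketch of a p-restricted Nash response for a two-player zero-sum matrix game. The paper's poker version works on the sequence-form abstraction with regret minimization instead, so the scipy-based LP below, the function name, and the rock-paper-scissors model are purely our own illustration:

import numpy as np
from scipy.optimize import linprog

def p_rnr(U, sigma_fix, p):
    """Row player's p-restricted Nash response in a zero-sum matrix game.
    U[i, j] is the row player's payoff; with probability p the column player
    follows sigma_fix, otherwise plays arbitrarily, so we solve
    max_x  p * x^T U sigma_fix + (1 - p) * min_j (x^T U)_j  as an LP."""
    m, n = U.shape
    vs_model = U @ sigma_fix                   # payoff of each row action vs the model
    c = np.concatenate([-p * vs_model, [-(1.0 - p)]])    # linprog minimizes
    A_ub = np.hstack([-U.T, np.ones((n, 1))])  # v <= (x^T U)_j for every column j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)   # sum(x) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[:m]

# Rock-paper-scissors against a model that over-plays rock; game value is 0.
U = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
model = np.array([0.6, 0.2, 0.2])
for p in (0.0, 0.5, 0.9, 1.0):
    x = p_rnr(U, model, p)
    exploitation = x @ U @ model          # utility against the model
    epsilon = 0.0 - min(x @ U)            # exploitability of the response
    print(f"p={p:.1f}  x={np.round(x, 3)}  exploit={exploitation:+.3f}  eps={epsilon:.3f}")

Sweeping p from 0 to 1 interpolates from the maximin (Nash) strategy to a plain best response to the model, which is exactly the exploitation-exploitability tradeoff examined next.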
Choosing p. In order to compute a RNR we have to choose a value of p. By varying the value
p ? [0, 1], we can produce poker strategies that are closer to a Nash equilibrium (when p is near 0) or
are closer to the best response (when p is near 1). When producing an RNR to a particular opponent,
it is useful to consider the tradeoff between the utility of the response against that opponent and
the exploitability of the response itself. We explore this tradeoff in Figure 1. In 1a we plot the
results of using RNR with various values of p against the model of PsOpti4. The x-axis shows the
exploitability of the response, $\epsilon$. The y-axis shows the exploitation of the model by the response
in the abstract game. Note that the actual exploitation and exploitability in the full game may be
different, as we explore later. Figure 1b shows this tradeoff against A80.
Notice that by selecting values of $p$, we can control the tradeoff between $\epsilon$ and the response's exploitation of the strategy. More importantly, the curves are highly concave meaning that dramatic
reductions in exploitability can be achieved with only a small sacrifice in the ability to exploit the
model.
Evaluation. We used RNR to compute a counter-strategy to the same seven opponents used in the
FBR experiments, with the p value used for each opponent selected such that the resulting $\epsilon$ is close
to 100 mb/h. The RNR strategies were played against these seven opponents and the equilibrium
CFR5 in the full game of Texas Hold'em. The results of this tournament are displayed as a crosstable
in Table 2.
The first thing to notice is that RNR is capable of exploiting the opponent for which it was designed
as a counter-strategy, while still performing well against the other opponents. In other words, not
only is the diagonal positive and large, most of the crosstable is positive. For the highly exploitable
opponents, such as A60 and A80, the degree of exploitation is much reduced from FBR, which is a
consequence of choosing $p$ such that $\epsilon$ is at most 100 mb/h. Notice, though, that it does exploit these
opponents significantly more than the approximate Nash strategy (CFR5).
                                             Opponents
              PsOpti4  PsOpti6   A60   A80  S1239  S1399  S2298  CFR5  Average
RNR-PsOpti4        85      112    39     9     63     61     -1   -23       43
RNR-PsOpti6        26      234    72    34     59     59      1   -28       57
RNR-A60           -17       63   582   -22     37     39     -9   -45       78
RNR-A80            -7       66    22   293     11     12      0   -29       46
RNR-S1239          38      130    68    31    111    106      9   -20       59
RNR-S1399          31      136    66    29    105    112      6   -24       58
RNR-S2298          21      137    72    30     77     76     31   -11       54
CFR5               36      123    93    41     70     68     17     0       56
Max                85      234   582   293    111    112     31     0
Table 2: Results of restricted Nash response (RNR) against a variety of opponent programs in full Texas Hold'em, with winnings in mb/h for the row player. See the caption of Table 1 for match details.
[Figure 2: bar chart of Performance (mb/h) for FBR Experts, RNR Experts, and 5555hs2, grouped by opponent: Opti4, Attack80, S1399, S2298, Training Average, BluffBot, Monash, Holdout Average.]
Figure 2: Performance of FBR-experts, RNR-experts, and a near Nash equilibrium strategy (CFR5) against "training" opponents and "hold out" opponents in 50 duplicate matches of 1000 hands.
Revisiting the family of S-bots, we notice that the known similarity of S1239 and S1399 is more
apparent with RNR. The performance of RNR with the correct model against these two players is
close to that of FBR, while the performance with the similar model is only a 6 mb/h drop. Essentially,
RNR is forced to exploit only the weaknesses that are general and is robust to small changes. Overall,
RNR offers a similar degree of exploitation to FBR, but with far more robustness.
6 Restricted Nash Experts
We have shown that RNR can be used to find robust counter-strategies. In this section we investigate
their use in an adaptive poker program. We generated four counter-strategies based on the opponents
PsOpti4, A80, S1399, and S2298, and then used these as experts which UCB1 [ACBF02] (a regret
minimizing algorithm) selected amongst. The FBR-experts algorithm used an FBR to each opponent,
and the RNR-experts used RNR to each opponent. We then played these two expert mixtures in
1000 hand matches against both the four programs used to generate the counter strategies as well as
two programs from the 2006 AAAI Computer Poker Competition, which have an unknown origin
and were developed independently of the other programs. We call the first four programs "training opponents" and the other two programs "holdout opponents", as they are similar to training error
and holdout error in supervised learning.
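As a concrete illustration of the experts layer, here is a minimal UCB1 sketch choosing among four counter-strategy arms; the per-hand reward model and the win rates (rescaled into [0, 1], as UCB1 assumes) are invented for illustration, not taken from the matches:

import math, random

def ucb1_select(counts, means, t):
    """UCB1 [ACBF02]: play each arm once, then pick the expert maximizing
    empirical mean + sqrt(2 ln t / n_i)."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    return max(range(len(counts)),
               key=lambda i: means[i] + math.sqrt(2.0 * math.log(t) / counts[i]))

random.seed(0)
true_means = [0.45, 0.55, 0.50, 0.62]       # hypothetical per-hand win rates
counts, means = [0] * 4, [0.0] * 4
for t in range(1, 5001):
    i = ucb1_select(counts, means, t)
    reward = 1.0 if random.random() < true_means[i] else 0.0   # one hand
    counts[i] += 1
    means[i] += (reward - means[i]) / counts[i]   # running average per expert
print(counts)   # most hands end up routed to the best counter-strategy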
The results of these matches are shown in Figure 2. As expected, when the opponent matches one of
the training models, FBR-experts and RNR-experts perform better, on average, than a near equilibrium strategy (see "Training Average" in Figure 2). However, if we look at the break down against individual opponents, we see that all of FBR's performance comes from its ability to significantly
exploit one single opponent. Against the other opponents, it actually performs worse than the nonadaptive near equilibrium strategy. RNR does not exploit A80 to the same degree as FBR, but also
does not lose to any opponent.
The comparison with the holdout opponents, though, is more realistic and more telling. Since it
is unlikely a player will have a model of the exact program it's likely to face in a competition,
it is important for its counter-strategies to exploit general weaknesses that might be encountered.
Our holdout programs have no explicit relationship to the training programs, yet the RNR counterstrategies are still effective at exploiting these programs as demonstrated by the expert mixture
being able to exploit these programs by more than the near equilibrium strategy. The FBR counterstrategies, on the other hand, performed poorly outside of the training programs, demonstrating once
again that RNR counter-strategies are both more robust and more suitable as a basis for adapting
behavior to other agents in the environment.
7 Conclusion
We proposed a new technique for generating robust counter-strategies in multiagent scenarios. The
restricted Nash responses balance exploiting suspected tendencies in other agents' behavior, while
bounding the worst-case performance when the tendency is not observed. The technique involves
computing an approximate equilibrium to a modification of the original game, and therefore can
make use of recently developed algorithms for solving very large extensive games. We demonstrated the technique in the domain of poker, showing it to generate more robust counter-strategies
than traditional best response. We also showed that a simple mixture of experts algorithm based on
restricted Nash response counter-strategies was far superior to using best response counter-strategies
if the exact opponent was not used in training. Further, the restricted Nash experts algorithm outperformed a static non-adaptive near equilibrium at exploiting the previously unseen programs.
References
[ACBF02] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite time analysis of the multiarmed bandit problem. Machine Learning, 47:235-256, 2002.
[BBD+03] D. Billings, N. Burch, A. Davidson, R. Holte, J. Schaeffer, T. Schauenberg, and D. Szafron. Approximating game-theoretic optimal strategies for full-scale poker. In International Joint Conference on Artificial Intelligence, pages 661-668, 2003.
[CM96] David Carmel and Shaul Markovitch. Learning models of intelligent agents. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, Menlo Park, CA, 1996. AAAI Press.
[GHPS07] A. Gilpin, S. Hoda, J. Pena, and T. Sandholm. Gradient-based algorithms for finding Nash equilibria in extensive form games. In Proceedings of the Eighteenth International Conference on Game Theory, 2007.
[GS06] A. Gilpin and T. Sandholm. A competitive Texas Hold'em poker player via automated abstraction and real-time equilibrium computation. In National Conference on Artificial Intelligence, 2006.
[JZB07] Michael Johanson, Martin Zinkevich, and Michael Bowling. Computing robust counter-strategies. Technical Report TR07-15, Department of Computing Science, University of Alberta, 2007.
[MB04] Peter McCracken and Michael Bowling. Safe strategies for agent modelling in games. In AAAI Fall Symposium on Artificial Multi-agent Learning, October 2004.
[Rob51] Julia Robinson. An iterative method of solving a game. Annals of Mathematics, 54:296-301, 1951.
[RV02] Patrick Riley and Manuela Veloso. Planning for distributed execution through use of probabilistic opponent models. In Proceedings of the Sixth International Conference on AI Planning and Scheduling, pages 77-82, April 2002.
[Sch06] T. C. Schauenberg. Opponent modelling and search in poker. Master's thesis, University of Alberta, 2006.
[ZBB07] M. Zinkevich, M. Bowling, and N. Burch. A new algorithm for generating strong strategies in massive zero-sum games. In Proceedings of the Twenty-Seventh Conference on Artificial Intelligence (AAAI), 2007. To appear.
[ZJBP08] M. Zinkevich, M. Johanson, M. Bowling, and C. Piccione. Regret minimization in games with incomplete information. In Neural Information Processing Systems 21, 2008.
[ZL06] M. Zinkevich and M. Littman. The AAAI computer poker competition. Journal of the International Computer Games Association, 29, 2006. News item.
2,589 | 3,348 | Fast and Scalable Training of Semi-Supervised CRFs
with Application to Activity Recognition
Maryam Mahdaviani
Computer Science Department
University of British Columbia
Vancouver, BC, Canada
Tanzeem Choudhury
Intel Research
1100 NE 45th Street
Seattle, WA 98105, USA
Abstract
We present a new and efficient semi-supervised training method for parameter estimation and feature selection in conditional random fields (CRFs). In real-world
applications such as activity recognition, unlabeled sensor traces are relatively
easy to obtain whereas labeled examples are expensive and tedious to collect.
Furthermore, the ability to automatically select a small subset of discriminatory
features from a large pool can be advantageous in terms of computational speed as
well as accuracy. In this paper, we introduce the semi-supervised virtual evidence
boosting (sVEB) algorithm for training CRFs ? a semi-supervised extension to the
recently developed virtual evidence boosting (VEB) method for feature selection
and parameter learning. The objective function of sVEB combines the unlabeled
conditional entropy with labeled conditional pseudo-likelihood. It reduces the
overall system cost as well as the human labeling cost required during training,
which are both important considerations in building real-world inference systems.
Experiments on synthetic data and real activity traces collected from wearable
sensors, illustrate that sVEB benefits from both the use of unlabeled data and automatic feature selection, and outperforms other semi-supervised approaches.
1 Introduction
Conditional random fields (CRFs) are undirected graphical models that have been successfully applied to the classification of relational and temporal data [1]. Training complex CRF models with
large numbers of input features is slow, and exact inference is often intractable. The ability to select
the most informative features as needed can reduce the training time and the risk of over-fitting of
parameters. Furthermore, in complex modeling tasks, obtaining the large amount of labeled data
necessary for training can be impractical. On the other hand, large unlabeled datasets are often easy
to obtain, making semi-supervised learning methods appealing in various real-world applications.
The goal of our work is to build an activity recognition system that is not only accurate but also scalable, efficient, and easy to train and deploy. An important application domain for activity recognition
technologies is in health-care, especially in supporting elder care, managing cognitive disabilities,
and monitoring long-term health. Activity recognition systems will also be useful in smart environments, surveillance, emergency and military missions. Some of the key challenges faced by current
activity inference systems are the amount of human effort spent in labeling and feature engineering
and the computational complexity and cost associated with training. Data labeling also has privacy
implications because it often requires human observers or recording of video. In this paper, we introduce a fast and scalable semi-supervised training algorithm for CRFs and evaluate its classification
performance on extensive real world activity traces gathered using wearable sensors. In addition
to being computationally efficient, our proposed method reduces the amount of labeling required
during training, which makes it appealing for use in real world applications.
Several supervised techniques have been proposed for feature selection in CRFs. For discrete features, McCallum [2] suggested an efficient method for feature induction by iteratively increasing
conditional log-likelihood. Dietterich [3] applied gradient tree boosting to select features in CRFs
by combining boosting with parameter estimation for 1D linear-chain models. Boosted random
fields (BRFs) [4] combine boosting and belief propagation for feature selection and parameter estimation for densely connected graphs that have weak pairwise connections. Recently, Liao et al. [5]
developed a more general version of BRFs, called virtual evidence boosting (VEB) that does not
make any assumptions about graph connectivity or the strength of pairwise connections. The objective function in VEB is a soft version of maximum pseudo-likelihood (MPL), where the goal is
to maximize the sum of local log-likelihoods given soft evidence from its neighbors. This objective
function is similar to that used in boosting, which makes it suitable for unified feature selection and
parameter estimation. This approximation applies to any CRF structures and leads to a significant
reduction in training complexity and time. Semi-supervised training techniques have been extensively explored in the case of generative models and naturally fit under the expectation maximization
framework [6]. However, it is not straightforward to incorporate unlabeled data in discriminative
models using the traditional conditional likelihood criteria. A few semi-supervised training methods for CRFs have been proposed that introduce dependencies between nearby data points [7, 8].
More recently, Grandvalet and Bengio [9] proposed a minimum entropy regularization framework
for incorporating unlabeled data. Jiao et al. [10] used this framework and proposed an objective
function that combines the conditional likelihood of the labeled data with the conditional entropy of
the unlabeled data to train 1D CRFs, which was extended to 2D lattice structures by Lee et al. [11].
In our work, we combine the minimum entropy regularization framework for incorporating unlabeled data with VEB for training CRFs. The contributions of our work are: (i) semi-supervised
virtual evidence boosting (sVEB) - an efficient technique for simultaneous feature selection and
semi-supervised training of CRFs, which to the best of our knowledge is the first method of its
kind, (ii) experimental results that demonstrate the strength of sVEB, which consistently outperforms other training techniques on synthetic data and real-world activity classification tasks, and
(iii) analysis of the time and complexity requirements of our algorithm, and comparison with other
existing techniques that highlight the significant computational advantages of our approach. The
sVEB algorithm is fast and easy to implement and has the potential of being broadly applicable.
2 Approaches to training of Conditional Random Fields
Maximum likelihood parameter estimation in CRFs involves maximizing the overall conditional
log-likelihood, where x is the observation sequence and y is the hidden state sequence:
$$L(\lambda) = \log p(\mathbf{y}|\mathbf{x}, \lambda) - \frac{\|\lambda\|^2}{2} = \log \frac{\exp\left(\sum_{k=1}^{K} \lambda_k f_k(\mathbf{x}, \mathbf{y})\right)}{\sum_{\mathbf{y}'} \exp\left(\sum_{k=1}^{K} \lambda_k f_k(\mathbf{x}, \mathbf{y}')\right)} - \frac{\|\lambda\|^2}{2} \qquad (1)$$
The conditional distribution is defined by a log-linear combination of $K$ feature functions $f_k$ associated with weights $\lambda_k$. A regularizer on $\lambda$ is used to keep the weights from getting too large and to avoid overfitting (footnote 1). For large CRFs exact inference is often intractable and approximate methods
such as mean field approximation or loopy belief propagation [12, 13] are used.
An alternative to approximating the conditional likelihood is to change the objective function.
MPL [14] and VEB [5] are such techniques. For MPL the CRF is cut into a set of independent
patches; each patch consists of a hidden node or class label $y_i$, the true value of its direct neighbors and the observations, i.e., the Markov Blanket ($MB_{y_i}$) of the node. The parameter estimation then
becomes maximizing the pseudo log-likelihood:
$$L_{pseudo}(\lambda) = \sum_{i=1}^{N} \log p(y_i|MB_{y_i}, \lambda) = \sum_{i=1}^{N} \log \frac{\exp\left(\sum_{k=1}^{K} \lambda_k f_k(MB_{y_i}, y_i)\right)}{\sum_{y_i'} \exp\left(\sum_{k=1}^{K} \lambda_k f_k(MB_{y_i'}, y_i')\right)}$$
MPL has been known to over-estimate the dependency parameters in some cases and there is no
general guideline on when it can be safely used [15].
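As a concrete instance, the sketch below evaluates the pseudo log-likelihood of a single labeled chain, conditioning each node on the true labels of its neighbors (its Markov blanket). The random potentials and labels are stand-ins of our own, not the paper's features:

import numpy as np

def pseudo_log_likelihood(node_pot, pair_pot, y):
    """Sum over nodes of log p(y_i | MB_{y_i}): each node sees its own
    log-potentials plus the pairwise log-potentials to its fixed neighbors."""
    T, S = node_pot.shape
    total = 0.0
    for i in range(T):
        logits = node_pot[i].copy()
        if i > 0:
            logits += pair_pot[y[i - 1], :]   # left neighbor clamped to truth
        if i < T - 1:
            logits += pair_pot[:, y[i + 1]]   # right neighbor clamped to truth
        total += logits[y[i]] - np.logaddexp.reduce(logits)
    return total

rng = np.random.default_rng(0)
print(pseudo_log_likelihood(rng.normal(size=(6, 3)), rng.normal(size=(3, 3)),
                            [0, 1, 2, 1, 0, 2]))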
Footnote 1: When a prior is used in the maximum likelihood objective function as a regularizer (the second term in eq. (1)), the method is in fact called maximum a posteriori.
2.1 Virtual evidence boosting
By extending the standard LogitBoost algorithm [16], VEB integrates boosting based feature selection into CRF training. The objective function used in VEB is very similar to MPL, except that
VEB uses the messages from the neighboring nodes as virtual evidence instead of using the true
labels of neighbors. The use of virtual evidence helps to reduce over-estimation of neighborhood
dependencies. We briefly explain the approach here but please refer to [5] for more detail.
VEB incorporates two types of observation nodes: (i) hard evidence corresponding to the observations $ve(x_i)$, which are indicator functions at the observation values, and (ii) soft evidence, corresponding to the messages from neighboring nodes $ve(n(y_i))$, which are discrete distributions over the hidden states. Let $ve_i \triangleq \{ve(x_i), ve(n(y_i))\}$. The objective function of VEB is as follows:
$$L_{VEB}(\lambda) = \sum_{i=1}^{N} \log p(y_i|ve_i, \lambda), \quad \text{where} \quad p(y_i|ve_i, \lambda) = \frac{\sum_{ve_i} ve_i \exp\left(\sum_{k=1}^{K} \lambda_k f_k(ve_i, y_i)\right)}{\sum_{y_i'} \sum_{ve_i} ve_i \exp\left(\sum_{k=1}^{K} \lambda_k f_k(ve_i, y_i')\right)} \qquad (2)$$
VEB learns a set of weak learners $f_t$ iteratively and estimates the combined feature $F_t = F_{t-1} + f_t$ by solving the following weighted least square error (WLSE) problem:
$$f_t(ve_i) = \operatorname*{argmin}_{f} \sum_{i=1}^{N} w_i E\left(f(ve_i) - z_i\right)^2 = \operatorname*{argmin}_{f} \left[\sum_{i=1}^{N} \sum_{ve_i} w_i\, p(y_i|ve_i) \left(f(ve_i) - z_i\right)^2\right] \qquad (3)$$
$$\text{where} \quad w_i = p(y_i|ve_i)(1 - p(y_i|ve_i)), \qquad z_i = \frac{y_i - 0.5}{p(y_i|ve_i)} \qquad (4)$$
The $w_i$ and $z_i$ in equation 4 are the boosting weight and working response respectively for the $i$th data point, exactly as in LogitBoost. However, the least square problem for VEB (eq. 3) involves $N \times X$ points because of virtual evidence as opposed to $N$ points in LogitBoost. Although eq. 4 is given for the binary case (i.e. $y_i \in \{0, 1\}$), it is easily extendible to the multi-class case and we have done that in our experiments. At each iteration, $ve_i$ is updated as messages from $n(y_i)$ change with the addition of new features. We run belief propagation (BP) to obtain the virtual evidence before each iteration. The CRF feature weights $\lambda$ are computed by solving the WLSE problem, where for the local features $n_{ki}$ is the count of feature $k$ in data instance $i$, and for the compatibility features $n_{ki}$ is the virtual evidence from the neighbors: $\lambda_k = \sum_{i=1}^{N} w_i z_i n_{ki} / \sum_{i=1}^{N} w_i n_{ki}$.
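To ground eqs. (3) and (4), here is a toy sketch of one boosting iteration using hard evidence only: a single continuous feature and a decision-stump weak learner fit by weighted least squares. The full VEB update additionally sums over virtual-evidence states; all names and data below are our own:

import numpy as np

def veb_style_step(x, y, p):
    """One LogitBoost-style iteration: boosting weights/responses per eq. (4),
    then a decision stump fit by weighted least squares per eq. (3).
    x: (N,) feature, y in {0, 1}, p: (N,) current estimates p(y_i = 1)."""
    w = p * (1.0 - p)                                   # eq. (4)
    z = (y - 0.5) / np.where(y == 1, p, 1.0 - p)        # p of the true label
    best = None
    for h in np.unique(x):                              # candidate thresholds
        right = x >= h
        wr, wl = np.sum(w[right]), np.sum(w[~right])
        b1 = np.sum(w[right] * z[right]) / max(wr, 1e-12)    # stump outputs
        b2 = np.sum(w[~right] * z[~right]) / max(wl, 1e-12)
        err = np.sum(w * (np.where(right, b1, b2) - z) ** 2)
        if best is None or err < best[0]:
            best = (err, h, b1, b2)
    return best   # (weighted error, threshold h, beta_1, beta_2)

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = (x + 0.3 * rng.normal(size=200) > 0).astype(float)
print(veb_style_step(x, y, np.full(200, 0.5)))   # start from p = 0.5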
2.2 Semi-supervised training
For semi-supervised training of CRFs, Jiao et al. [10] have proposed an algorithm that utilizes unlabeled data via entropy regularization, an extension of the approach proposed by [9] to structured
CRF models. The objective function that is maximized during semi-supervised training of CRFs is
given below, where $(\mathbf{x}_l, \mathbf{y}_l)$ and $(\mathbf{x}_u, \mathbf{y}_u)$ represent the labeled and unlabeled data respectively:
$$L_{SS}(\lambda) = \log p(\mathbf{y}_l|\mathbf{x}_l, \lambda) + \alpha \sum_{\mathbf{y}_u} p(\mathbf{y}_u|\mathbf{x}_u, \lambda) \log p(\mathbf{y}_u|\mathbf{x}_u, \lambda) - \frac{\|\lambda\|^2}{2}$$
By minimizing the conditional entropy of the unlabeled data, the algorithm will generally find a labeling of the unlabeled data that mutually reinforces the supervised labels. One drawback of this
objective function is that it is no longer concave and in general there will be local maxima. The
authors [10] showed that this method is still effective in improving an initial supervised model.
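For intuition about the entropy term, the brute-force sketch below computes H(y|x) for a toy chain with random log-linear potentials. It is our own illustration; real chain CRFs compute this with forward-backward rather than enumeration:

import itertools
import numpy as np

def conditional_entropy(node_pot, pair_pot):
    """H(y|x) of a tiny linear-chain CRF by enumerating all label sequences.
    node_pot[t, y] and pair_pot[y, y'] are log-potentials (lambda^T f)."""
    T, S = node_pot.shape
    logits = []
    for y in itertools.product(range(S), repeat=T):
        score = sum(node_pot[t, y[t]] for t in range(T))
        score += sum(pair_pot[y[t], y[t + 1]] for t in range(T - 1))
        logits.append(score)
    logits = np.array(logits)
    log_p = logits - np.logaddexp.reduce(logits)   # log p(y|x) for every y
    return float(-np.sum(np.exp(log_p) * log_p))

rng = np.random.default_rng(0)
print(conditional_entropy(rng.normal(size=(5, 2)), rng.normal(size=(2, 2))))
# the quantity that the alpha-weighted term drives toward zero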
3 Semi-supervised virtual evidence boosting
In this work, we develop semi-supervised virtual evidence boosting (sVEB) that combines feature
selection with semi-supervised training of CRFs. sVEB extends the VEB framework to take advantage of unlabeled data via minimum entropy regularization similar to [9, 10, 11]. The new objective
function $L_{sVEB}$ we propose is as follows, where $(i = 1 \cdots N)$ are labeled and $(i = N+1 \cdots M)$ are unlabeled examples:
$$L_{sVEB} = \sum_{i=1}^{N} \log p(y_i|ve_i) + \alpha \sum_{i=N+1}^{M} \sum_{y_i'} p(y_i'|ve_i) \log p(y_i'|ve_i) \qquad (5)$$
The sVEB algorithm, similar to VEB, maximizes the conditional soft pseudo-likelihood of the labeled data but in addition minimizes the conditional entropy over unlabeled data. The $\alpha$ is a tuning
parameter for controlling how much influence the unlabeled data will have.
By considering the soft pseudo-likelihood in $L_{sVEB}$ and using BP to estimate $p(y_i|ve_i)$, sVEB can use boosting to learn the parameters of CRFs. The virtual evidence from the neighboring nodes captures the label dependencies. There are three different types of feature functions $f$ used: for continuous observations $f_1(x_i)$ is a linear combination of decision stumps, for discrete observations the learner $f_2(x_i)$ is expressed as indicator functions, and for virtual evidences the weak learner $f_3(y_i)$ is the weighted sum of two indicator functions (for the binary case). These functions are computed as follows, where $\delta$ is an indicator function, $h$ is a threshold for the decision stump, and $D$ is the number of dimensions of the observations:
$$f_1(x_i) = \beta_1 \delta(x_i \ge h) + \beta_2 \delta(x_i < h), \quad f_2(x_i) = \sum_{d=1}^{D} \beta_d\, \delta(x_i = d), \quad f_3(y_i) = \sum_{k=0}^{1} \beta_k\, \delta(y_i = k) \qquad (6)$$
Similar to LogitBoost and VEB, the sVEB algorithm estimates a combined feature function $F$ that maximizes the objective by sequentially learning a set of weak learners $f_t$ (i.e. iteratively selecting features). In other words, sVEB solves the following weighted least-square error (WLSE) problem to learn the $f_t$:
$$f_t = \operatorname*{argmin}_{f} \left[\sum_{i=1}^{N} \sum_{ve_i} w_i\, p(y_i|ve_i) \left(f(x_i) - z_i\right)^2 + \sum_{i=N+1}^{M} \sum_{y_i'} \sum_{ve_i} w_i\, p(y_i'|ve_i) \left(f(x_i) - z_i\right)^2\right] \qquad (7)$$
For labeled data (first term in eq. 7), the boosting weights $w_i$ and working responses $z_i$ are computed as described in equation 4. But for the case of unlabeled data the expressions for $w_i$ and $z_i$ become more complicated because of the entropy term. We present the equations for $w_i$ and $z_i$ below; please refer to the Appendix for the derivations:
$$w_i = 2\alpha (1 - p(y_i|ve_i))\left[p(y_i|ve_i)(1 - p(y_i|ve_i)) + \log p(y_i|ve_i)\right]$$
$$z_i = \frac{(y_i - 0.5)\, p(y_i|ve_i)\left(1 - \log p(y_i|ve_i)\right)}{\alpha\left[p(y_i|ve_i)(1 - p(y_i|ve_i)) + \log p(y_i|ve_i)\right]} \qquad (8)$$
The soft evidence corresponding to messages from the neighboring nodes is obtained by running BP on the entire training dataset (labeled and unlabeled). The CRF feature weights $\lambda_k$ are computed by solving the WLSE problem (eq. (7)): $\lambda_k = \sum_{i=1}^{M} \sum_{y_i} w_i z_i n_{ki} / \sum_{i=1}^{M} \sum_{y_i} w_i n_{ki}$.
Algorithm 1 gives the pseudo-code for sVEB. The main difference between VEB and sVEB are steps 7-10, where we compute $w_i$'s and $z_i$'s for all possible values of $y_i$ based on the virtual evidence and observations of unlabeled training cases. The boosting weights and working responses are computed using equation (8). The weighted least-square error (WLSE) equation (eq. 7) in step 10 of sVEB is different from that of VEB and the solution results in slightly different CRF feature weights $\lambda$. One of the major advantages of VEB and sVEB over ML and sML is that the parameter estimation is done mainly by performing feature counting. Unlike ML and sML, we do not need to use an optimizer to learn the model parameters, which results in a huge reduction in the time required to train the CRF models. Please refer to the complexity analysis section for details.
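The contrast between the labeled and unlabeled updates can be transcribed directly. The sketch below follows the reconstructed eqs. (4) and (8) for the binary case with candidate label y = 1; the helper names are our own and alpha = 1.5 mirrors the experiments:

import math

def wz_labeled(p_true, y=1):
    """Eq. (4): boosting weight and working response for a labeled node;
    p_true is the current estimate p(y_i | ve_i) of the observed label."""
    return p_true * (1.0 - p_true), (y - 0.5) / p_true

def wz_unlabeled(p_cand, y=1, alpha=1.5):
    """Eq. (8): weight and response for an unlabeled node, evaluated per
    candidate label y, with current belief p_cand = p(y | ve_i)."""
    g = p_cand * (1.0 - p_cand) + math.log(p_cand)   # shared bracketed term
    w = 2.0 * alpha * (1.0 - p_cand) * g
    z = (y - 0.5) * p_cand * (1.0 - math.log(p_cand)) / (alpha * g)
    return w, z

for p in (0.55, 0.75, 0.9):
    print(p, wz_labeled(p), wz_unlabeled(p))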
4 Experiments
We conduct two sets of experiments to evaluate the performance of the sVEB method for training
CRFs and the advantage of performing feature selection as part of semi-supervised training. In
the first set of experiments, we analyze how much the complexity of the underlying CRF and the
tuning parameter ? effect the performance using synthetic data. In the second set of experiments, we
evaluate the benefit of feature selection and using unlabeled data on two real-world activity datasets.
We compare the performance of the semi-supervised virtual evidence boosting(sVEB) presented in
this paper to the semi-supervised maximum likelihood (sML) method [10]. In addition, for the activity datasets, we also evaluate an alternative approach (sML+Boost), where a subset of features
is selected in advance using boosting. To benchmark the performance of the semi-supervised techniques, we also evaluate three different supervised training approaches, namely maximum likelihood
Algorithm 1: Training CRFs using semi-supervised VEB
inputs : structure of CRF and training data $(x_i, y_i)$, with $y_i \in \{0, 1\}$, $1 \le i \le M$, and $F_0 = 0$
output: Learned $F_T$ and their corresponding weights, $\lambda$
1  for t = 1, 2, ..., T do
2      Run BP using $F_t$ to get virtual evidences $ve_i$;
3      for i = 1, 2, ..., N do
4          Compute likelihood $p(y_i|ve_i)$;
5          Compute $w_i$ and $z_i$ using equation (4)
6      end
7      for i = N+1, ..., M and $y_i$ = 0, 1 do
8          Compute likelihood $p(y_i|ve_i)$;
9          Compute $w_i$ and $z_i$ using equation (8)
10     end
11     Obtain "best" weak learner $f_t$ according to equation (7) and update $F_t = F_{t-1} + f_t$;
12 end
[Figure 1: three panels of Accuracy curves for sML and sVEB; (a) Accuracy vs. Dimension of Observations (100-500), (b) Accuracy vs. Number of states (0-40), (c) Accuracy vs. Values of $\alpha$ (0-10).]
Figure 1: Accuracy of sML and sVEB for different number of states, local features and different values of $\alpha$.
method using all observed features (ML), (ML+Boost) using a subset of features selected in advance, and virtual evidence boosting (VEB). All the learned models are tested using standard maximum a posteriori (MAP) estimate and belief propagation. We used an $\ell_2$-norm shrinkage prior as a regularizer for the ML and sML methods.
4.1 Synthetic data
The synthetic data is generated using a first-order Markov Chain with self-transition probabilities
set to 0.9. For each model, we generate five sequences of length 4,000 and divide each trace into
sequences of length 200. We randomly choose 50% of them as the labeled and the other 50% as unlabeled training data. We perform leave-one-out cross-validation and report the average accuracies.
To measure how the complexity of the CRFs affects the performance of the different semi-supervised
methods, we vary the number of local features and the number of states. First, we compare the performance of sVEB and sML on CRFs with increasing the number of features. The number of states
is set to 10 and the number of observation features is varied from 20 to 400 observations. Figure
(1a) shows the average accuracy for the two semi-supervised training methods and their confidence
intervals. The experimental results demonstrate that sVEB outperforms sML as we increase the dimension of observations (i.e. the number of local features). In the second experiment, we increase
the number of classes and keep the dimension of observations fixed to 100. Figure (1b) demonstrates
that sVEB again outperforms sML as we increase the number of states. Given the same amount of
training data, sVEB is less likely to overfit because of the feature selection step. In both these experiments we set the value of the tuning parameter $\alpha$ to 1.5. To explore the effect of the tuning parameter $\alpha$, we vary the value of $\alpha$ from 0.1 to 10, while setting the number of states to 10 and the number of dimensions to 100. Figure (1c) shows that the performance of both sML and sVEB depends on the value of $\alpha$ but the accuracy decreases for large $\alpha$'s, similar to the sML results presented in [10].
[Figure 2: sensor traces with the corresponding ground truth and inferred class labels (8 classes) plotted over time (0-5000).]
Figure 2: An example of a sensor trace and a classification trace
Average Accuracy (%) - Dataset 1
Labeled   ML+all obs   ML+Boost     VEB
60%       62.7 ± 6.6   69.4 ± 3.9   82.6 ± 7.3
80%       73.0 ± 4.2   81.8 ± 4.7   90.3 ± 4.7
100%      77.8 ± 3.4   87.0 ± 2.3   91.5 ± 3.8

Average Accuracy (%) - Dataset 2
Labeled   ML+all obs   ML+Boost     VEB
60%       74.3 ± 3.7   75.8 ± 3.3   88.5 ± 5.1
80%       80.6 ± 2.9   84.8 ± 2.9   93.4 ± 3.8
100%      86.2 ± 3.1   87.5 ± 3.1   93.8 ± 4.6

Table 1: Accuracy ± 95% confidence interval of the supervised algorithms on activity datasets 1 and 2
4.2 Activity dataset
We collected two activity datasets using wearable sensors, which include audio, acceleration, light,
temperature, pressure, and humidity. The first dataset contains instances of 8 basic physical activities
(e.g. walking, running, going up/down stairs, going up/down elevator, sitting, standing, and brushing
teeth) from 7 different users. There is on average 30 minutes of data per user and a total of 3.5 hours
of data that is manually labeled for training and testing purposes. The data is segmented into 0.25s
chunks resulting in a total of 49613 data points. For each chunk, we compute 651 features, which
include signal energy in log and linear frequency bands, autocorrelation, different entropy measures,
mean, variances etc. The features are chosen based on what is used in existing activity recognition
literature and a few additional ones that we felt could be useful. During training, the data from
each person is divided into sequences of length 200 and fed into linear chain CRFs as observations.
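To give a flavor of the feature computation, the sketch below produces a few of the per-chunk features named above (band energies, autocorrelation, spectral entropy, mean, variance). The sampling rate, band edges, and lag count are invented; this is not the paper's exact 651-feature pipeline:

import numpy as np

def chunk_features(x, sr=100):
    """A handful of per-chunk features of the kinds described in the text."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    edges = np.logspace(0, np.log10(sr / 2), 6)          # 5 log-spaced bands
    band_energy = [spec[(freqs >= lo) & (freqs < hi)].sum()
                   for lo, hi in zip(edges[:-1], edges[1:])]
    xc = x - x.mean()
    autocorr = np.correlate(xc, xc, mode="full")[len(x) - 1:][:5]   # 5 lags
    p = spec / spec.sum()
    spec_entropy = -np.sum(p * np.log(p + 1e-12))
    return np.concatenate([band_energy, autocorr,
                           [spec_entropy, x.mean(), x.var()]])

chunk = np.random.default_rng(0).normal(size=25)   # one 0.25 s chunk at 100 Hz
print(chunk_features(chunk).shape)                 # (13,)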
The second dataset contains instances of 5 different indoor activities (e.g. computer usage, meal,
meeting, watching TV and sleeping) from a single user. We recorded 15 hours of sensor traces over
12 days. As this set contains longer time-scale activities, the data is segmented into 1 minute chunks
and 321 different features are computed, similar to the first dataset. There are a total of 907 data
points. These features are fed into CRFs as observations, one linear chain CRF is created per day.
We evaluate the performance of supervised and semi-supervised training algorithms on these two
datasets. For the semi-supervised case, we randomly select 40% of the sequences for a given person
or a given day as labeled and a different subset as the unlabeled training data. We compare the
performance of sML and sVEB as we incorporate more unlabeled data (20%, 40% and 60%) into
the training process. We also compare the supervised techniques, ML, ML+Boost, and VEB, with
increasing amount of labeled data. For all the experiments, the tuning parameter $\alpha$ is set to 1.5. We perform leave-one-person-out cross-validation on dataset 1 and leave-one-day-out cross-validation on dataset 2 and report the average accuracies. The number of features chosen (i.e. through the boosting iterations) is set to 50 for both datasets; including more features did not significantly
improve the classification performance.
For both datasets, incorporating more unlabeled data improves accuracy. The sML estimate of the
CRF parameters performs the worst. Even with the shrinkage prior, the high dimensionality can still
cause over-fitting and lower the accuracy. Whereas parameter estimation and feature selection via
sVEB consistently results in the highest accuracy. The (sML+Boost) method performs better than
sML but does not perform as well as when feature selection and parameter estimation is done within
a unified framework as in sVEB. Table 2 summarizes our results. The results of supervised learn-
Average Accuracy (%) - Dataset 1
Unlabeled   sML+all obs   sML+Boost    sVEB
20%         60.8 ± 5.4    66.4 ± 4.2   72.6 ± 2.3
40%         68.1 ± 4.8    76.8 ± 3.4   78.5 ± 3.4
60%         74.9 ± 3.1    81.3 ± 3.9   85.3 ± 4.1

Average Accuracy (%) - Dataset 2
Unlabeled   sML+all obs   sML+Boost    sVEB
20%         71.4 ± 3.2    70.5 ± 5.3   79.9 ± 4.2
40%         73.5 ± 5.8    74.1 ± 4.6   83.5 ± 6.3
60%         75.6 ± 3.9    77.8 ± 3.2   87.4 ± 4.7

Table 2: Accuracy ± 95% confidence interval of semi-supervised algorithms on activity datasets 1 and 2
Average Accuracy (%) - Dataset 1
Labeled   ML+all obs   ML+Boost     VEB
5%        59.2 ± 6.5   65.7 ± 8.3   71.2 ± 5.7
20%       66.9 ± 5.9   67.3 ± 8.5   77.4 ± 3.6

Average Accuracy (%) - Dataset 2
Labeled   ML+all obs   ML+Boost     VEB
5%        71.2 ± 4.1   68.3 ± 6.7   79.7 ± 7.9
20%       71.4 ± 6.3   73.8 ± 5.2   83.1 ± 6.4

Table 3: Accuracy ± 95% confidence interval of semi-supervised algorithms on activity datasets 1 and 2
ing algorithms are presented in Table 1. Similar to the semi-supervised results, the VEB method
performs the best, the ML is the worst performer, and the accuracy numbers for the (ML+Boost)
method is in between. The accuracy increases if we incorporate more labeled data during training.
To evaluate sVEB when a small amount of labeled data is available, we performed another set of
experiments on datasets 1 and 2, where only 5% and 20% of the training data is labeled respectively. We used all the available unlabeled data during training. The results are shown in Table 3.
These experiments clearly demonstrate that although adding more unlabeled data is not as helpful
as incorporating more labeled data, the use of cheap unlabeled data along with feature selection can
significantly boost the performance of the models.
4.3 Complexity Analysis
The sVEB and VEB algorithms are significantly faster than ML and sML because they do not need to use optimizers such as quasi-Newton methods to learn the weight parameters. For each training iteration in sML the cost of running BP is $O(c_l n s^2 + c_u n^2 s^3)$ [10] whereas the cost of each boosting iteration in sVEB is $O((c_l + c_u) n s^2)$. An efficient entropy gradient computation is proposed in [17], which reduces the cost of sML to $O((c_l + c_u) n s^2)$ but still requires an optimizer to maximize the log-likelihood. Moreover, the number of training iterations needed is usually much higher than the number of boosting iterations because optimizers such as L-BFGS require many more iterations to reach convergence in high dimensional spaces. For example, for dataset 1, we needed about 1000 iterations for sML to converge but we ran sVEB for only 50 iterations. Table 4 shows the time for performing the experiments on activity datasets (as described in the previous section; the experiments were run in the Matlab environment and as a result they took longer). On the other hand the space complexity of sVEB is linearly smaller than that of sML and ML. Similar to ML, sML has the space complexity of $O(n s^2 D)$ in the best case [10]. VEB and sVEB have a lower space cost of $O(n s^2 D_b)$, because of the feature selection step, with $D_b \ll D$ usually. Therefore, the difference becomes significant when we are dealing with high dimensional data, particularly if they include a large number of redundant features.
n: length of training sequence; $c_l$: number of labeled training sequences; $c_u$: number of unlabeled training sequences; s: number of states; D, $D_b$: dimension of observations

Time (hours)
            ML     ML+Boost   VEB    sML    sML+Boost   sVEB
Dataset 1   34     18         2.5    96     48          4
Dataset 2   7.5    4.25       0.4    10.5   8           0.6

Table 4: Training time for the different algorithms.
5 Conclusion
We presented sVEB, a new semi-supervised training method for CRFs, that can simultaneously select discriminative features via modified LogitBoost and utilize unlabeled data via minimum-entropy regularization. Our experimental results demonstrate that sVEB significantly outperforms other training techniques in real-world activity recognition problems. The unified framework for feature selection and semi-supervised training presented in this paper reduces the computational and human labeling costs, which are often the major bottlenecks in building large classification systems.
Acknowledgments
The authors would like to thank Nando de Freitas and Lin Liao for many helpful discussions. This work was
supported by the NSF under grant number IIS 0433637 and NSERC Canada Graduate Scholarship.
References
[1] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting
and labeling sequence data. In Proc. of the International Conference on Machine Learning (ICML), 2001.
[2] Andrew McCallum. Efficiently inducing features of conditional random fields. In Proc. of the Conference
on Uncertainty in Artificial Intelligence (UAI), 2003.
[3] T. Dietterich, A. Ashenfelter, and Y. Bulatov. Training conditional random fields via gradient tree boosting. In Proc. of the International Conference on Machine Learning (ICML), 2004.
[4] A. Torralba, K. P. Murphy, and W. T. Freeman. Contextual models for object detection using boosted
random fields. In Advances in Neural Information Processing Systems (NIPS), 2004.
[5] L. Liao, T. Choudhury, D. Fox, and H. Kautz. Training conditional random fields using virtual evidence
boosting. In Proc. of the International Joint Conference on Artificial Intelligence (IJCAI), 2007.
[6] K. Nigam, A. McCallum, S. Thrun, and T. Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, 2000.
[7] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In Proc. of the International Conference on Machine Learning (ICML), 2003.
[8] W. Li and A. McCallum. Semi-supervised sequence modeling with syntactic topic models. In Proc. of the National Conference on Artificial Intelligence (AAAI), 2005.
[9] Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. In Advances in Neural
Information Processing Systems (NIPS), 2004.
[10] F. Jiao, W. Wang, C. H. Lee, R. Greiner, and D. Schuurmans. Semi-supervised conditional random
fields for improved sequence segmentation and labeling. In International Committee on Computational
Linguistics and the Association for Computational Linguistics, 2006.
[11] C. Lee, S. Wang, F. Jiao, D. Schuurmans, and R. Greiner. Learning to Model Spatial Dependency: Semi-Supervised Discriminative Random Fields. In NIPS, 2006.
[12] J.S. Yedidia, W.T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized
belief propagation algorithms. IEEE Transactions on Information Theory, 51(7):2282-2312, 2005.
[13] Y. Weiss. Comparing mean field method and belief propagation for approximate inference in MRFs. 2001.
[14] J. Besag. Statistical analysis of non-lattice data. The Statistician, 24, 1975.
[15] C. J. Geyer and E. A. Thompson. Constrained Monte Carlo Maximum Likelihood for dependent data.
Journal of the Royal Statistical Society, 1992.
[16] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Additive logistic regression: a statistical view of
boosting. The Annals of Statistics, 38(2):337-374, 2000.
[17] G. Mann and A. McCallum. Efficient computation of entropy gradient for semi-supervised conditional
random fields. In Human Language Technologies, 2007.
6 Appendix
In this section, we show how we derived the equations for $w_i$ and $z_i$ (eq. 8):
$$L_F = L_{sVEB} = L_{VEB} - \alpha H_{emp} = \sum_{i=1}^{N} \log p(y_i|ve_i) + \alpha \sum_{i=N+1}^{M} \sum_{y_i'} p(y_i'|ve_i) \log p(y_i'|ve_i)$$
As in LogitBoost, the likelihood function $L_F$ is maximized by learning an ensemble of weak learners. We start with an empty ensemble $F = 0$ and iteratively add the next best weak learner, $f_t$, by computing the Newton update $F(ve_i, y_i) \leftarrow F(ve_i, y_i) - \frac{s}{H}$, where $s = \frac{\partial L_{F+f}}{\partial f}\big|_{f=0}$ and $H = \frac{\partial^2 L_{F+f}}{\partial f^2}\big|_{f=0}$ are the first and second derivative respectively of $L_F$ with respect to $f(ve_i, y_i)$:
$$s = \sum_{i=1}^{N} 2(2y_i - 1)(1 - p(y_i|ve_i)) + \alpha \sum_{i=N+1}^{M} \sum_{y_i'} \left[2(2y_i' - 1)(1 - p(y_i'|ve_i))\, p(y_i'|ve_i)\left(1 - \log p(y_i'|ve_i)\right)\right]$$
$$H = -\sum_{i=1}^{N} 4 p(y_i|ve_i)(1 - p(y_i|ve_i))(2y_i - 1)^2 - 2\alpha \sum_{i=N+1}^{M} \sum_{y_i'} 4(2y_i' - 1)^2 (1 - p(y_i'|ve_i))\left[p(y_i'|ve_i)(1 - p(y_i'|ve_i)) + \log p(y_i'|ve_i)\right]$$
Writing the update in terms of boosting weights and working responses gives
$$F \leftarrow F + \frac{\sum_{i=1}^{N} z_i w_i + \sum_{i=N+1}^{M} \sum_{y_i'} z_i w_i}{\sum_{i=1}^{N} w_i + \sum_{i=N+1}^{M} \sum_{y_i'} w_i}$$
where
$$z_i = \begin{cases} \dfrac{y_i - 0.5}{p(y_i|ve_i)} & \text{if } 1 \le i \le N \quad \text{(eq. (4))} \\[2ex] \dfrac{(y_i' - 0.5)\, p(y_i'|ve_i)\left(1 - \log p(y_i'|ve_i)\right)}{\alpha\left[p(y_i'|ve_i)(1 - p(y_i'|ve_i)) + \log p(y_i'|ve_i)\right]} & \text{if } N < i \le M \quad \text{(eq. (8))} \end{cases}$$
and
$$w_i = \begin{cases} p(y_i|ve_i)(1 - p(y_i|ve_i)) & \text{if } 1 \le i \le N \quad \text{(eq. (4))} \\[1ex] 2\alpha (1 - p(y_i'|ve_i))\left[p(y_i'|ve_i)(1 - p(y_i'|ve_i)) + \log p(y_i'|ve_i)\right] & \text{if } N < i \le M \quad \text{(eq. (8))} \end{cases}$$
At iteration $t$ we get the best weak learner, $f_t$, by solving the WLSE problem in eq. 7.
2,590 | 3,349 | Theoretical Analysis of Learning with
Reward-Modulated Spike-Timing-Dependent
Plasticity
Robert Legenstein, Dejan Pecevski, Wolfgang Maass
Institute for Theoretical Computer Science
Graz University of Technology
A-8010 Graz, Austria
{legi,dejan,maass}@igi.tugraz.at
Abstract
Reward-modulated spike-timing-dependent plasticity (STDP) has recently
emerged as a candidate for a learning rule that could explain how local learning
rules at single synapses support behaviorally relevant adaptive changes in complex networks of spiking neurons. However, the potential and limitations of this
learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which
allow us to predict under which conditions reward-modulated STDP will be able
to achieve a desired learning effect. In particular, we can produce in this way
a theoretical explanation and a computer model for a fundamental experimental
finding on biofeedback in monkeys (reported in [1]).
1 Introduction
A major puzzle for understanding learning in biological organisms is the relationship between experimentally well-established learning rules for synapses (such as STDP) on the microscopic level
and adaptive changes of the behavior of biological organisms on the macroscopic level. Neuromodulatory systems which send diffuse signals related to reinforcements (rewards) and behavioral state
to several large networks of neurons in the brain have been identified as likely intermediaries that
relate these two levels of learning. It is well-known that the consolidation of changes of synaptic
weights in response to pre- and postsynaptic neuronal activity requires the presence of such third
signals [2]. Corresponding spike-based learning rules of the form
$$\frac{dw_{ji}(t)}{dt} = c_{ji}(t)\, d(t), \qquad (1)$$
have been proposed in [3], where wji is the weight of a synapse from neuron i to neuron j, cji (t) is
an eligibility trace of this synapse which collects proposed weight changes resulting from a learning
? is a neuromodulatory signal with mean h
? (where h(t) might
rule such as STDP, and d(t) = h(t) ? h
for example represent reward prediction errors, encoded th rough the concentration of dopamine in
the extra-cellular fluid). We will consider in this article only cases where the reward prediction
error is equal to the current reward. We will refer to d(t) simply as the reward signal. Obviously
such learning scheme (1) faces a large credit-assignment problem, since not only those synapses
for which weight changes would increase the chances of future reward receive the top-down signal
d(t), but billions of other synapses too. Nevertheless the brain is able to solve this credit-assignment
problem, as has been shown in one of the earliest (but still among the most amazing) demonstrations
of biofeedback in monkeys [1]. The spiking activity of single neurons (in area 4 of the precentral
gyrus) was recorded, the current firing rate of this neuron was made visible to the monkey in the
form of an illuminated meter, and the monkey received food rewards for increases (or in alternating
trials for decreases) of the firing rate of this neuron from its average level. The monkeys learnt quite
reliably (on the time scale of 10's of minutes) to change the firing rate of this neuron in the currently
rewarded direction.¹ Obviously the existence of learning mechanisms in the brain which are able to
solve this difficult credit assignment problem is fundamental for understanding and modeling many
other learning features of the brain. We present in sections 3 and 4 of this abstract a learning theory for
(1), where the eligibility trace $c_{ji}(t)$ results from standard forms of STDP, which is able to explain
the success of the experiment in [1]. This theoretical model is confirmed by computer simulations
(see section 4.1). In section 5 we leave this concrete learning experiment and investigate under what
conditions neurons can learn through trial and error (via reward-modulated STDP) associations of
specific firing patterns to specific patterns of input spikes. The resulting theory leads to predictions
of specific parameter ranges for STDP that support this general form of learning. These were tested
through computer experiments, see 5.1.
Other interesting results of computer simulations of reward-modulated STDP in the context of neural
circuits were recently reported in [3] and [4] (we also refer to these articles for reviews of preceding
work by Seung and others).
2 Models for neurons and synaptic plasticity
The spike train of a neuron $i$ which fires action potentials at times $t_i^{(1)}, t_i^{(2)}, t_i^{(3)}, \ldots$ is formalized
by a sum of Dirac delta functions $S_i(t) = \sum_{t_i^{(n)}} \delta(t - t_i^{(n)})$. We assume that positive and negative
weight changes suggested by STDP for all pairs of pre- and postsynaptic spikes (according to the
two integrals in (2)) are collected in an eligibility trace $c_{ji}(t)$, where the impact of a spike pairing
with the second spike at time $t-s$ on the eligibility trace at time $t$ is given by some function $f_c(s)$
for $s \ge 0$:

$$c_{ji}(t) = \int_0^{\infty} ds\, f_c(s) \left[ \int_0^{\infty} dr\, W(r)\, S_j^{post}(t-s)\, S_i^{pre}(t-s-r) + \int_0^{\infty} dr\, W(-r)\, S_j^{post}(t-s-r)\, S_i^{pre}(t-s) \right]. \tag{2}$$

In our simulations, $f_c(s)$ is a function of the form $f_c(s) = \frac{s}{\tau_e}\, e^{-s/\tau_e}$ if $s \ge 0$ and 0 otherwise, with
time constant $\tau_e = 0.5$ s. $W(r)$ denotes the standard exponential STDP learning window

$$W(r) = \begin{cases} A_+\, e^{-r/\tau_+}, & \text{if } r \ge 0 \\ -A_-\, e^{r/\tau_-}, & \text{if } r < 0, \end{cases} \tag{3}$$

where the positive constants $A_+$ and $A_-$ scale the strength of potentiation and depression, $\tau_+$ and
$\tau_-$ are positive time constants defining the width of the positive and negative learning window, and
$S_i^{pre}$, $S_j^{post}$ are the spike trains of the presynaptic and postsynaptic neuron respectively. The actual
weight change is the product of the eligibility trace with the reward signal as defined by equation (1).
We assume that weights are clipped at the lower boundary value 0 and an upper boundary wmax .
We use a linear Poisson neuron model whose output spike train $S_j^{post}(t)$ is a realization of a Poisson
process with the underlying instantaneous firing rate $R_j(t)$. The effect of a spike of presynaptic
neuron $i$ at time $t'$ on the membrane potential of neuron $j$ is modeled by an increase in the instantaneous firing rate by an amount $w_{ji}(t')\,\epsilon(t - t')$, where $\epsilon$ is a response kernel which models the
time course of a postsynaptic potential (PSP) elicited by an input spike. Since STDP according to
[3] has been experimentally confirmed only for excitatory synapses, we will consider plasticity only
for excitatory connections and assume that $w_{ji} \ge 0$ for all $i$ and $\epsilon(s) \ge 0$ for all $s$. Because the
synaptic response is scaled by the synaptic weights, we can assume without loss of generality that
the response kernel is normalized to $\int_0^{\infty} ds\, \epsilon(s) = 1$. In this linear model, the contributions of all
inputs are summed up linearly:

$$R_j(t) = \sum_{i=1}^{n} \int_0^{\infty} ds\, w_{ji}(t-s)\, \epsilon(s)\, S_i(t-s), \tag{4}$$

where $S_1, \ldots, S_n$ are the $n$ presynaptic spike trains.

¹Adjacent neurons tended to change their firing rate in the same direction, but also differential changes of
directions of firing rates of pairs of neurons are reported in [1] (when these differential changes were rewarded).
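To make the model concrete, here is a minimal Python sketch of equations (1)-(4) for a single synapse. It is an illustration only, not the paper's simulation code: all parameter values are assumed, the reward signal is a toy placeholder, and a first-order exponential trace stands in for the paper's eligibility kernel $f_c$.

```python
import numpy as np

# Minimal sketch of reward-modulated STDP, eqs. (1)-(4), for one synapse.
# All parameter values are illustrative assumptions; a first-order
# exponential trace stands in for the paper's kernel f_c.
dt = 1e-3                            # time step [s]
steps = int(10.0 / dt)               # simulate 10 s
tau_plus = tau_minus = 0.02          # STDP window time constants [s]
A_plus, A_minus = 1e-3, 1.05e-3      # potentiation / depression amplitudes
tau_e = 0.5                          # eligibility time constant [s]

rng = np.random.default_rng(0)
pre = rng.random(steps) < 5.0 * dt   # presynaptic Poisson spikes at 5 Hz
w = 0.5                              # synaptic weight in [0, wmax], wmax = 1
x_pre = x_post = 0.0                 # decaying traces of recent spikes
c = 0.0                              # eligibility trace c_ji(t)
for t in range(steps):
    rate = 2.0 + 20.0 * w * x_pre            # crude stand-in for eq. (4)
    post = rng.random() < rate * dt          # linear Poisson output spike
    # spike pairings update the eligibility trace, not the weight (eq. 2):
    pairing = A_plus * x_pre * post - A_minus * x_post * pre[t]
    c += -c / tau_e * dt + pairing
    d = 0.1 if post else -0.01               # toy reward signal d(t)
    w = float(np.clip(w + c * d * dt, 0.0, 1.0))   # eq. (1), clipped weight
    x_pre += -x_pre / tau_plus * dt + pre[t]
    x_post += -x_post / tau_minus * dt + post
print("final weight: %.3f" % w)
```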
3 Theoretical analysis of the resulting weight changes
We are interested in the expected weight change over some time interval $T$ (see [5]), where the
expectation is over realizations of the stochastic input and output spike trains as well as a stochastic
realization of the reward signal, denoted by the ensemble average $\langle \cdot \rangle_E$:

$$\frac{\langle w_{ji}(t+T) - w_{ji}(t)\rangle_E}{T} = \frac{1}{T} \left\langle \int_t^{t+T} \frac{d}{dt'}\, w_{ji}(t')\, dt' \right\rangle_E = \left\langle \left\langle \frac{d}{dt}\, w_{ji}(t) \right\rangle_T \right\rangle_E, \tag{5}$$

where we used the abbreviation $\langle f(t)\rangle_T = T^{-1} \int_t^{t+T} f(t')\, dt'$. Using equation (1), this yields
$$\frac{\langle w_{ji}(t+T) - w_{ji}(t)\rangle_E}{T} = \int_0^{\infty} dr\, W(r) \int_0^{\infty} ds\, f_c(s)\, \langle D_{ji}(t,s,r)\, \nu_{ji}(t-s,r)\rangle_T + \int_{-\infty}^{0} dr\, W(r) \int_{|r|}^{\infty} ds\, f_c(s+r)\, \langle D_{ji}(t,s,r)\, \nu_{ji}(t-s,r)\rangle_T, \tag{6}$$

where $D_{ji}(t,s,r) = \langle d(t) \mid \text{neuron } j \text{ spikes at } t-s, \text{ and neuron } i \text{ spikes at } t-s-r \rangle_E$ is the
average reward at time $t$ given a presynaptic spike at time $t-s-r$ and a postsynaptic spike at
time $t-s$, and $\nu_{ji}(t,r) = \langle S_j(t)\, S_i(t-r)\rangle_E$ describes correlations between pre- and postsynaptic
spike timings (see [6] for the derivation). We see that the expected weight change depends on how
the correlations between the pre- and postsynaptic neurons correlate with the reward signal. If these
correlations are varying slowly with time, we can exploit the self-averaging property of the weight
vector. Analogously to [5], we can drop the ensemble average on the left hand side and obtain:

$$\frac{d}{dt}\langle w_{ji}(t)\rangle_T = \int_0^{\infty} dr\, W(r) \int_0^{\infty} ds\, f_c(s)\, \langle D_{ji}(t,s,r)\, \nu_{ji}(t-s,r)\rangle_T + \int_{-\infty}^{0} dr\, W(r) \int_{|r|}^{\infty} ds\, f_c(s+r)\, \langle D_{ji}(t,s,r)\, \nu_{ji}(t-s,r)\rangle_T. \tag{7}$$
In the following, we will always use the smooth time-averaged vector $\langle w_{ji}(t)\rangle_T$, but for brevity, we
will drop the angular brackets. If one assumes for simplicity that the impact of a pre-post spike pair
on the eligibility trace is always triggered by the postsynaptic spike, one gets (see [6] for details):

$$\frac{dw_{ji}(t)}{dt} = \int_0^{\infty} ds\, f_c(s) \int_{-\infty}^{\infty} dr\, W(r)\, \langle D_{ji}(t,s,r)\, \nu_{ji}(t-s,r)\rangle_T. \tag{8}$$
This assumption (which is common in STDP analysis) will introduce a small error for post-before-pre
spike pairs, since if a reward signal arrives at some time $d_r$ after the pairing, the weight update
will be proportional to $f_c(d_r)$ instead of $f_c(d_r + r)$. For the analyses presented in this article, the
simplified equation (8) is a good approximation for the learning dynamics (see [6]). Equation (8)
shows that if the reward signal does not depend on pre- and postsynaptic spike statistics, the weight
will change according to standard STDP scaled by a constant proportional to the mean reward.
4 Application to biofeedback experiments
We now apply our theoretical approach to the biofeedback experiments by Fetz and Baker [1] that
we have sketched in the introduction. The authors showed that it is possible to increase and decrease
the firing rate of a randomly chosen neuron by rewarding the monkey for its high (respectively low)
firing rates. We assume in our model that a reward is delivered to all neurons in the simulated
recurrent network with some delay $d_r$ every time a specific neuron $k$ in the network produces an
action potential:

$$d(t) = \int_0^{\infty} dr\, S_k^{post}(t - d_r - r)\, \epsilon_r(r), \tag{9}$$

where $\epsilon_r(r)$ is the shape of the reward pulse corresponding to one postsynaptic spike of the reinforced neuron. We assume that the reward kernel $\epsilon_r$ has zero mass, i.e., $\bar\epsilon_r = \int_0^{\infty} dr\, \epsilon_r(r) = 0$. In
our simulations, this reward kernel will have a positive bump in the first few hundred milliseconds,
and a long tailed negative bump afterwards. With the linear Poisson neuron model (see Section 2),
the correlation of the reward with pre-post spike pairs of the reinforced neuron is (see [6])

$$D_{ki}(t,s,r) = w_{ki} \int_0^{\infty} dr'\, \epsilon_r(r')\, \epsilon(s + r - d_r - r') + \epsilon_r(s - d_r) \approx \epsilon_r(s - d_r). \tag{10}$$
The last approximation holds if the impact of a single input spike on the membrane potential is
small. The correlation of the reward with pre-post spike pairs of non-reinforced neurons is

$$D_{ji}(t,s,r) = \int_0^{\infty} dr'\, \epsilon_r(r')\, \frac{\nu_{kj}(t - d_r - r',\, s - d_r - r') + w_{ki}\, w_{ji}\, \epsilon(s + r - d_r - r')\, \epsilon(r)}{\nu_j(t-s) + w_{ji}\, \epsilon(r)}. \tag{11}$$

If the contribution of a single postsynaptic potential to the membrane potential is small, we can
neglect the impact of the presynaptic spike and write

$$D_{ji}(t,s,r) \approx \int_0^{\infty} dr'\, \epsilon_r(r')\, \frac{\nu_{kj}(t - d_r - r',\, s - d_r - r')}{\nu_j(t-s)}. \tag{12}$$
Hence, the reward-spike correlation of a non-reinforced neuron depends on the correlation of this
neuron with the reinforced neuron. The mean weight change for weights to the reinforced neuron is
given by

$$\frac{d}{dt}\, w_{ki}(t) = \int_0^{\infty} ds\, f_c(s + d_r)\, \epsilon_r(s) \int_{-\infty}^{\infty} dr\, W(r)\, \langle \nu_{ki}(t - d_r - s,\, r)\rangle_T. \tag{13}$$

This equation basically describes STDP with a learning rate that is proportional to the eligibility
function in the time around the reward delay. The mean weight change of neurons $j \ne k$ is given by

$$\frac{d}{dt}\, w_{ji}(t) = \int_0^{\infty} ds\, f_c(s) \int_{-\infty}^{\infty} dr\, W(r) \int_0^{\infty} dr'\, \epsilon_r(r')\, \left\langle \frac{\nu_{kj}(t - d_r - r',\, s - d_r - r')}{\nu_j(t-s)}\, \nu_{ji}(t-s,\, r) \right\rangle_T. \tag{14}$$
If the output of neurons j and k are uncorrelated, this evaluates to approximately zero (see [6]).
The result can be summarized as follows. The reinforced neuron is trained by STDP. Other neurons
are trained by STDP with a learning rate proportional to their correlation with the reinforced neuron.
If a neuron is uncorrelated with the reinforced neuron, the learning rate is approximately zero.
4.1 Computer simulations
In order to test the theoretical predictions for the experiment described in the previous section, we
have performed a computer simulation with a generic neural microcircuit receiving a global reward
signal. This global reward signal increases its value every time a specific neuron (the reinforced
neuron) in the circuit fires. The circuit consists of 1000 leaky integrate-and-fire (LIF) neurons (80%
excitatory and 20% inhibitory), which are interconnected by conductance based synapses. The short
term dynamics of synapses was modeled in accordance with experimental data (see [6]). Neurons
within the recurrent circuit were randomly connected with probabilities $p_{ee} = 0.08$, $p_{ei} = 0.08$,
$p_{ie} = 0.096$ and $p_{ii} = 0.064$, where the $ee$, $ei$, $ie$, $ii$ indices designate the type of the presynaptic
and postsynaptic neurons (excitatory or inhibitory). To reproduce the synaptic background activity
of neocortical neurons in vivo, an Ornstein-Uhlenbeck (OU) conductance noise process modeled
according to ([7]) was injected in the neurons, which also elicited spontaneous firing of the neurons
in the circuit with an average rate of 4Hz. In half of the neurons part of the noise was substituted
with random synaptic connections from the circuit, in order to observe how the learning mechanisms work when most of the input conductance in the neuron comes from a larger number of input
synapses which are plastic, instead of a static noise process. The function $f_c(t)$ from equation (2)
had the form $f_c(t) = \frac{t}{\tau_e}\, e^{-t/\tau_e}$ if $t \ge 0$ and 0 otherwise, with time constant $\tau_e = 0.5$ s. The reward
signal during the simulation was computed according to eq. (9), with the following shape for $\epsilon_r(t)$:
$$\epsilon_r(t) = A_r^+\, \frac{t}{\tau_r^+}\, e^{-t/\tau_r^+} - A_r^-\, \frac{t}{\tau_r^-}\, e^{-t/\tau_r^-}. \tag{15}$$

The parameter values for $\epsilon_r(t)$ were chosen such as to produce a positive reward pulse with a peak
delayed 0.5 s from the spike that caused it, and a long-tailed negative bump so that $\int_0^{\infty} dt\, \epsilon_r(t) = 0$.
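The zero-mass property of this kernel is easy to verify numerically. The sketch below evaluates equation (15) with assumed amplitudes and time constants chosen so that $A_r^+ \tau_r^+ = A_r^- \tau_r^-$; with these toy values the positive peak lands near $\tau_r^+$ rather than the paper's 0.5 s.

```python
import numpy as np

# Sketch of the reward pulse kernel eps_r(t) of eq. (15); the amplitudes and
# time constants are illustrative assumptions satisfying A+ tau+ = A- tau-.
def eps_r(t, A_plus=1.0, A_minus=0.25, tau_plus=0.2, tau_minus=0.8):
    pos = A_plus * (t / tau_plus) * np.exp(-t / tau_plus)
    neg = A_minus * (t / tau_minus) * np.exp(-t / tau_minus)
    return np.where(t >= 0.0, pos - neg, 0.0)

t = np.arange(0.0, 10.0, 1e-3)
k = eps_r(t)
print("peak at t = %.2f s" % t[np.argmax(k)])   # early positive bump
print("integral = %.4f" % np.trapz(k, t))       # ~0: zero-mass kernel
```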
[Figure 1 panels: A) rate [Hz] vs. time [min]; B) avg. weights (w/wmax) vs. time [min]; C) spike trains vs. time [sec].]
Figure 1: Computer simulation of the experiment by Fetz and Baker [1]. A) The firing rate of the
reinforced neuron (solid line) increases while the average firing rate of 20 other randomly chosen
neurons in the circuit (dashed line) remains unchanged. B) Evolution of the average synaptic weight
of excitatory synapses connecting to the reinforced neuron (solid line) and to other neurons (dashed
line). C) Spike trains of the reinforced neuron at the beginning and at the end of the simulation.
For values of other model parameters see [6]. The learning rule (1) was applied to all synapses in the
circuit which have excitatory presynaptic and postsynaptic neurons. The simulation was performed
for 20 min simulated biological time with a simulation time step of 0.1ms.
Fig. 1 shows that the firing rate and synaptic weights of the reinforced neuron increase within a few
minutes of simulated biological time, while those of the other neurons remain largely unchanged.
Note that this reinforcement learning task is more difficult than that of the first computer experiment
of [3], where postsynaptic firing within 10 ms after presynaptic firing of a randomly chosen synapse
was rewarded, since the relationship between the reward and synaptic activity (and hence with STDP) is less direct
in this setup. Whereas a very low spontaneous firing rate of 1 Hz was required in [3], this simulation
shows that reinforcement learning is also feasible at rate levels which correspond to those reported
in [1].
5 Rewarding spike-timings
In order to explore the limits of reward-modulated STDP, we have also investigated a substantially
more demanding reinforcement learning scenario. The reward signal $d(t)$ was given in dependence
on how well the output spike train $S_j^{post}$ of the neuron $j$ matched some rather arbitrary spike train $S^*$
that was produced by some neuron that received the same $n$ input spike trains as the trained neuron
with arbitrary weights $\mathbf{w}^* = (w_1^*, \ldots, w_n^*)^T$, $w_i^* \in \{0, w_{max}\}$, but in addition $n' - n$ further
spike trains $S_{n+1}, \ldots, S_{n'}$ with weights $w_i^* = w_{max}$. This setup provides a generic reinforcement
learning scenario, when a quite arbitrary (and not perfectly realizable) spike output is reinforced, but
simultaneously the performance of the learner can be evaluated quite clearly according to how well
its weights $w_1, \ldots, w_n$ match those of the target neuron for those $n$ input spike trains which both of
them receive. The reward $d(t)$ at time $t$ is given by

$$d(t) = \int_{-\infty}^{\infty} dr\, \kappa(r)\, S_j^{post}(t - d_r)\, S^*(t - d_r - r), \tag{16}$$

where the function $\kappa(r)$ with $\bar\kappa = \int_{-\infty}^{\infty} ds\, \kappa(s) > 0$ describes how the reward signal depends
on the time difference between a postsynaptic spike and a target spike, and $d_r > 0$ is the delay
of the reward. Our theoretical analysis below suggests that this reinforcement learning task can
in principle be solved by reward-modulated STDP if some constraints are fulfilled. The analysis
also reveals which reward kernels $\kappa$ are suitable for this learning setup. The reward correlation for
synapse $i$ is (see [6])

$$D_{ji}(t,s,r) = \int_{-\infty}^{\infty} dr'\, \kappa(r')\, \big[\nu_j^{post}(t - d_r) + \delta(s - d_r) + w_{ji}(t-s-r)\, \epsilon(s + r - d_r)\big]\, \big[\nu^*(t - d_r - r') + w_i^*\, \epsilon(s + r - d_r - r')\big], \tag{17}$$

where $\nu_j^{post}(t) = \langle S_j^{post}(t)\rangle_E$ denotes the mean rate of the trained neuron at time $t$, and $\nu^*(t) = \langle S^*(t)\rangle_E$ denotes the mean rate of the target spike train at time $t$. Since weights are changing very
slowly, we have $w_{ji}(t-s-r) = w_{ji}(t)$. In the following, we will drop the dependence of $w_{ji}$ on
$t$ for brevity. For simplicity, we assume that input rates are stationary and uncorrelated. In this case
(since the weights are changing slowly), also the correlations between inputs and outputs can be
assumed stationary, $\nu_{ji}(t,r) = \nu_{ji}(r)$. We assume that the eligibility function satisfies $f_c(d_r) \approx f_c(d_r + r)$
if $|r|$ is on a time scale of a PSP, the learning window, or the reward kernel, and that $d_r$ is large
compared to these time scales. Then, for uncorrelated Poisson input spike trains of rate $\nu_i^{pre}$ and the
linear Poisson neuron model, the weight change at synapse $ji$ is given by

$$\begin{aligned}
\frac{dw_{ji}(t)}{dt} \;\approx\;& \bar f_c\, \bar\kappa\, \nu_i^{pre}\, \nu_j^{post} \left[ \nu_j^{post}\, \bar W + w_{ji}\, \bar W_\epsilon \right] \\
&+ \bar\kappa\, f_c(d_r)\, \nu_i^{pre}\, \nu_j^{post}\, \bar W \left[ \bar\epsilon + \bar\epsilon\, w_{ji} + w_i^*\, \nu_j^{post} \right] \\
&+ f_c(d_r)\, w_i^*\, \nu_i^{pre}\, \nu_j^{post} \left[ \int_{-\infty}^{\infty} dr\, W(r)\, \kappa_\epsilon(r) + w_{ji} \int_{-\infty}^{\infty} dr\, W(r)\, \epsilon(r)\, \kappa_\epsilon(r) \right] \\
&+ f_c(d_r)\, w_i^*\, w_{ji}\, \nu_i^{pre} \left[ \nu_j^{post}\, \bar W + w_{ji}\, \bar W_\epsilon \right] \int_0^{\infty} dr\, \epsilon(r)\, \kappa_\epsilon(r),
\end{aligned} \tag{18}$$

where $\bar f_c = \int_0^{\infty} dr\, f_c(r)$, $\bar W = \int_{-\infty}^{\infty} dr\, W(r)$ is the integral over the STDP learning window,
$\bar W_\epsilon = \int_{-\infty}^{\infty} dr\, \epsilon(r)\, W(r)$, $\kappa_\epsilon(r) = \int_{-\infty}^{\infty} dr'\, \kappa(r')\, \epsilon(r - r')$ is the convolution
of the reward kernel with the PSP, and $\bar W_\kappa = \int_{-\infty}^{\infty} dr\, \kappa(r)\, W(r)$.
We will now bound the expected weight change for synapses $ji$ with $w_i^* = w_{max}$ and for synapses
$jk$ with $w_{jk}^* = 0$. In this way we can derive conditions for which the expected weight change for the
former synapses is positive, and that for the latter type is negative. First, we assume that the integral
over the reward kernel is positive. In this case, the weight change is negative for synapses $i$ with
$w_i^* = 0$ if and only if $\nu_i^{pre} > 0$, and $-\nu_j^{post}\, \bar W > w_{ji}\, \bar W_\epsilon$. In the worst case, $w_{ji}$ is $w_{max}$ and $\nu_j^{post}$
is small. We have to guarantee some minimal output rate $\nu_{min}^{post}$ such that even if $w_{ji} = w_{max}$, this
inequality is fulfilled. This could be guaranteed by some noise current. For synapses $i$ with $w_i^* =
w_{max}$, we obtain two more conditions (see [6] for a derivation). The conditions are summarized in
inequalities (19)-(21). If these inequalities are fulfilled and input rates are positive, then the weight
vector converges on average from any initial weight vector to $\mathbf{w}^*$:

$$-\nu_{min}^{post}\, \bar W > w_{max}\, \bar W_\epsilon \tag{19}$$

$$\int_{-\infty}^{\infty} dr\, W(r)\, \epsilon(r)\, \kappa_\epsilon(r) \;\le\; -\nu_{max}^{post}\, \bar W \int_0^{\infty} dr\, \kappa(r)\, \kappa_\epsilon(r) \tag{20}$$

$$\int_{-\infty}^{\infty} dr\, W(r)\, \kappa_\epsilon(r) \;>\; -\bar W \left[ \frac{\nu_{max}^{post}\, \bar f_c\, \bar\kappa}{w_{max}\, f_c(d_r)} + \frac{\bar\epsilon}{w_{max}} + \bar\kappa + \nu_{max}^{post} \right], \tag{21}$$

where $\nu_{max}^{post}$ is the maximal output rate. The second condition is less severe, and should be easily
fulfilled in most setups. If this is the case, the first condition (19) ensures that weights with $w^* = 0$
are depressed while the third condition (21) ensures that weights with $w^* = w_{max}$ are potentiated.
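The integrals entering these conditions are straightforward to evaluate numerically. The sketch below checks inequality (19) as reconstructed above; the STDP window, PSP kernel, and the rate/weight values are assumptions for illustration, not the settings of Table 1.

```python
import numpy as np

# Evaluate the integrals entering condition (19) numerically; the STDP
# window, PSP kernel, and rate/weight values below are assumptions for
# illustration, not the settings of Table 1.
r = np.arange(-0.2, 0.2, 1e-4)           # lag axis [s]
A_plus, A_minus, tau_p, tau_m = 1.0, 1.05, 0.02, 0.02
W = np.where(r >= 0, A_plus * np.exp(-r / tau_p),
             -A_minus * np.exp(r / tau_m))             # STDP window, eq. (3)
eps = np.where(r >= 0, np.exp(-r / 0.01) / 0.01, 0.0)  # normalized PSP kernel

W_bar = np.trapz(W, r)        # integral over the STDP window (negative here)
W_eps = np.trapz(W * eps, r)  # overlap of STDP window and PSP
nu_min, w_max = 10.0, 0.005   # assumed minimal output rate and max weight
print("W_bar = %.4f, W_eps = %.4f" % (W_bar, W_eps))
print("condition (19) holds:", -nu_min * W_bar > w_max * W_eps)
```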
Optimal reward kernels: From condition (21), we can deduce optimal reward kernels $\kappa$. The
kernel should be such that the integral $\int_{-\infty}^{\infty} dr\, W(r)\, \kappa_\epsilon(r)$ is large, while the integral over $\kappa$ is small
(but positive). Hence, $\kappa_\epsilon(r)$ should be positive for $r > 0$ and negative for $r < 0$. In the following
experiments, we use a simple kernel which satisfies the aforementioned constraints:

$$\kappa(t) = \begin{cases} A_\kappa^+ \left( e^{-(t - t^*)/\tau_1^\kappa} - e^{-(t - t^*)/\tau_2^\kappa} \right), & \text{if } t - t^* \ge 0 \\ -A_\kappa^- \left( e^{(t - t^*)/\tau_1^\kappa} - e^{(t - t^*)/\tau_2^\kappa} \right), & \text{otherwise,} \end{cases}$$

where $A_\kappa^+$ and $A_\kappa^-$ are positive scaling constants, $\tau_1^\kappa$ and $\tau_2^\kappa$ define the shape of the two double-exponential functions the kernel is composed of, and $t^*$ defines the offset of the zero-crossing from
the origin. The optimal offset from the origin is negative and on the order of tens of milliseconds
for usual PSP shapes $\epsilon$. Hence, reward is positive if the neuron spikes around the target spike or
somewhat later, and negative if the neuron spikes much too early.
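A sketch of this kernel, with assumed constants picked so that $\kappa$ is negative before the (negative) offset $t^*$, positive after it, and has a small positive integral:

```python
import numpy as np

# Sketch of the double-exponential reward kernel kappa(t) defined above.
# Amplitudes, time constants, and the offset t_star are assumed values.
def kappa(t, A_plus=1.0, A_minus=0.8, tau1=0.04, tau2=0.02, t_star=-0.02):
    s = t - t_star
    pos = A_plus * (np.exp(-s / tau1) - np.exp(-s / tau2))
    neg = -A_minus * (np.exp(s / tau1) - np.exp(s / tau2))
    return np.where(s >= 0.0, pos, neg)

t = np.arange(-0.3, 0.3, 1e-4)
k = kappa(t)
print("kappa at the target spike (t=0): %.3f" % kappa(np.array([0.0]))[0])
print("integral of kappa: %.4f" % np.trapz(k, t))   # small but positive
```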
5.1 Computer simulations
In the computer simulations we explored the learning rule in a more biologically realistic setting,
where we used a leaky integrate-and-fire (LIF) neuron with input synaptic connections coming from
[Figure 2 panels: A) average weights (w/wmax) vs. time [min] over 0-120 min; B) spike rasters vs. time [sec]: output before learning, target S* (= rewarded spike times), realizable part of target S*, output after learning.]
Figure 2: Reinforcement learning of spike times. A) Synaptic weight changes of the trained LIF
neuron, for 5 different runs of the experiment. The curves show the average of the synaptic weights
that should converge to $w_i^* = 0$ (dashed lines), and the average of the synaptic weights that should
converge to $w_i^* = w_{max}$ (solid lines) with a different shading for each simulation run. B) Comparison of the output of the trained neuron before (upper trace) and after learning (lower trace; the
same input spike trains and the same noise inputs were used before and after training for 2 hours).
The second trace from above shows those spike times which are rewarded, the third trace shows the
target spike train without the additional noise inputs.
[Figure 3 panels: A) Δw for synapses with w* = wmax and B) Δw for synapses with w* = 0, for Exp. No. 1-6.]
Figure 3: Predicted average weight change (black bars) calculated from equation (18), and the
estimated average weight change (gray bars) from simulations, presented for 6 different experiments
with different parameter settings (see Table 1).² A) Weight change values for synapses with $w_i^* = w_{max}$.
B) Weight change values for synapses with $w_i^* = 0$. Cases where the constraints are not fulfilled
are shaded with gray color.
a generic neural microcircuit composed of 1000 LIF neurons. The synapses were conductance-based,
exhibiting short-term facilitation and depression. The trained neuron and the arbitrarily given
neuron which produced the target spike train S ? (?target neuron?) both were connected to the same
randomly chosen, 100 excitatory and 10 inhibitory neurons from the circuit. The target neuron
had 10 additional excitatory input connections (these weights were set to wmax ), not accessible to
the trained neuron. Only the synapses of the trained neuron connecting from excitatory neurons
were set to be plastic. The target neuron had a weight vector with $w_i^* = 0$ for $0 \le i < 50$ and
$w_i^* = w_{max}$ for $50 \le i < 110$. The generic neural microcircuit from which the trained and
the target neurons receive the input had 80% excitatory and 20% inhibitory neurons interconnected
randomly with a probability of 0.1. The neurons received background synaptic noise as modeled in
[7], which caused spontaneous activity of the neurons with an average firing rate of 6.9Hz. During
the simulations, we observed a firing rate of 10.6Hz for the trained, and 19Hz for the target neuron.
The reward was delayed by 0.5s, and we used the same eligibility trace function fc (t) as in the
simulations for the biofeedback experiment (see [6] for details). The simulations were run for two
hours simulated biological time, with a simulation time step of 0.1ms. We performed 5 repetitions
of the experiment, each time with different randomly generated circuits and different initial weight
values for the trained neuron. In each of the 5 runs, the average synaptic weights of synapses with
$w_i^* = w_{max}$ and $w_i^* = 0$ approach their target values, as shown in Fig. 2A. In order to test how
² The values in the figure are calculated as $\Delta w = \frac{w(t_{sim}) - w(0)}{w_{max}/2}$ for the simulations, and with
$\Delta w = \frac{\langle dw/dt \rangle\, t_{sim}}{w_{max}/2}$ for the predicted value. $w(t)$ is the average weight over synapses with the same value of $w^*$.
Ex. | τ_ε [ms] | w_max | ν_min^post [Hz] | A_+ · 10⁶ | A_-/A_+ | τ_+, τ_2^κ [ms] | A_κ^+ | t_sim [h]
 1  | 10 | 0.012 | 10 | 16.62 | 1.05 | 20, 20 | 3.34 |  5
 2  |  7 | 0.020 |  5 | 11.08 | 1.02 | 15, 16 | 4.58 | 10
 3  | 20 | 0.010 |  6 |  5.54 | 1.10 | 25, 40 | 1.46 | 16
 4  |  7 | 0.020 |  5 | 11.08 | 1.07 | 25, 16 | 4.67 | 13
 5  | 10 | 0.015 |  6 | 20.77 | 1.10 | 25, 20 | 3.75 |  3
 6  | 25 | 0.005 |  3 | 13.85 | 1.01 | 25, 20 | 3.34 | 13

Table 1: Parameter values used for the simulations in Figure 3. Both cases where the constraints
are satisfied and not satisfied were covered. PSPs were modeled as $\epsilon(s) = e^{-s/\tau_\epsilon}/\tau_\epsilon$.
closely the learning neuron reproduces the target spike train $S^*$ after learning, we have performed
additional simulations where the same spiking input $S_I$ is applied to the learning neuron before and
after we conducted the learning experiment (results are reported in Fig. 2B).
The equations in section 5 define a parameter space for which the trained neuron can learn the target
synapse pattern $\mathbf{w}^*$. We have chosen 6 different parameter values encompassing cases with satisfied
and non-satisfied constraints, and performed experiments where we compare the predicted average
weight change from equation (18) with the actual average weight change produced by simulations.
Figure 3 summarizes the results. In all 6 experiments, the predictions of the sufficient conditions (19)-(21)
were correct. In those cases where these conditions were not met, the weight moved in the opposite direction,
suggesting that the theoretically sufficient conditions (19)-(21) might also be necessary.
6 Discussion
We have developed in this paper a theory of reward-modulated STDP. This theory predicts that reinforcement learning through reward-modulated STDP is also possible at biologically more realistic
spontaneous firing rates than the average rate of 1 Hz that was used (and argued to be needed) in the
extensive computer experiments of [3]. We have also shown both analytically and through computer
experiments that the result of the fundamental biofeedback experiment in monkeys from [1] can be
explained on the basis of reward-modulated STDP. The resulting theory of reward-modulated STDP
makes concrete predictions regarding the shape of various functions (e.g. reward functions) that
would optimally support the speed of reward-modulated learning for the generic (but rather difficult) learning tasks where a neuron is supposed to respond to input spikes with specific patterns of
output spikes, and only spikes at the right times are rewarded. Further work (see [6]) shows that
reward-modulated STDP can in some cases replace supervised training of readout neurons from
generic cortical microcircuit models.
Acknowledgment: We would like to thank Gordon Pipa and Matthias Munk for helpful discussions.
Written under partial support by the Austrian Science Fund FWF, project # P17229, project #
S9102 and project # FP6-015879 (FACETS) of the European Union.
References
[1] E. E. Fetz and M. A. Baker. Operantly conditioned patterns of precentral unit activity and correlated
responses in adjacent cells and contralateral muscles. J Neurophysiol, 36(2):179-204, Mar 1973.
[2] C. H. Bailey, M. Giustetto, Y.-Y. Huang, R. D. Hawkins, and E. R. Kandel. Is heterosynaptic modulation
essential for stabilizing Hebbian plasticity and memory? Nature Reviews Neuroscience, 1:11-20, 2000.
[3] E. M. Izhikevich. Solving the distal reward problem through linkage of STDP and dopamine signaling.
Cerebral Cortex Advance Access, January 13:1-10, 2007.
[4] R. V. Florian. Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity.
Neural Computation, 6:1468-1502, 2007.
[5] W. Gerstner and W. M. Kistler. Spiking Neuron Models. Cambridge University Press, Cambridge, 2002.
[6] R. Legenstein, D. Pecevski, and W. Maass. Theory and applications of reward-modulated spike-timing-dependent plasticity. In preparation, 2007.
[7] A. Destexhe, M. Rudolph, J.M. Fellous, and T.J. Sejnowski. Fluctuating synaptic conductances recreate in
vivo-like activity in neocortical neurons. Neuroscience, 107(1):13-24, 2001.
2,591 | 335 | Stereopsis by a Neural Network
Which Learns the Constraints
Alireza Khotanzad and Ying-Wung Lee
Image Processing and Analysis Laboratory
Electrical Engineering Department
Southern Methodist University
Dallas, Texas 75275
Abstract
This paper presents a neural network (NN) approach to the problem of
stereopsis. The correspondence problem (finding the correct matches
between the pixels of the epipolar lines of the stereo pair from amongst all
the possible matches) is posed as a non-iterative many-to-one mapping. A
two-layer feed forward NN architecture is developed to learn and code this
nonlinear and complex mapping using the back-propagation learning rule
and a training set. The important aspect of this technique is that none of
the typical constraints such as uniqueness and continuity are explicitly
imposed. All the applicable constraints are learned and internally coded
by the NN enabling it to be more flexible and more accurate than the
existing methods. The approach is successfully tested on several random-dot stereograms. It is shown that the net can generalize its learned mapping to cases outside its training set. Advantages over the Marr-Poggio
algorithm are discussed and it is shown that the NN performance is superior.
1 INTRODUCTION
Three-dimensional image processing is an indispensable property for any advanced
computer vision system. Depth perception is an integral part of 3-d processing. It
involves computation of the relative distances of the points seen in the 2-d images
to the imaging device. There are several methods to obtain depth information. A
common technique is stereo imaging. It uses two cameras displaced by a known
distance to generate two images of the same scene taken from these two different
viewpoints. Distances to objects can be computed if corresponding points are
identified in both frames. Corresponding points are two image points which
correspond to the same object point in the 3-d space as seen by the left and the
right cameras, respectively. Thus, solving the so called "correspondence problem"
is the essential stage of depth perception by stereo imaging.
Many computational approaches to the correspondence problem have been studied
in the past. An exhaustive review of such techniques is best left to the survey article by Dhond and Aggarwal (1989). Common to all such techniques is the
employment of some constraints to limit the computational requirement and also
reduce the ambiguity. They usually consist of strict rules that are fixed a priori
and are based on a rough model of the surface to be solved. Unfortunately,
psychophysical evidence on human stereopsis suggests that the appropriate constraints are more complex and more flexible than can be characterized by simple fixed
rules.
In this paper, we suggest a novel approach to the stereo correspondence problem
via neural networks (NN). The problem is cast into a mapping framework and
subsequently solved by a NN which is especially suited to such tasks. An important aspect of this approach is that the appropriate constraints are automatically
learned and generalized by the net resulting in a flexible and more accurate model.
The iterative algorithm developed by Marr and Poggio (1976) can be regarded
as a crude neural network approach with no embedded learning. In fact, the initial stages of the proposed technique follow the same initial steps taken in that
algorithm. However, the later stages of the two algorithms are quite distinct with
ours involving a learning process and non-iterative operation.
There have been other recent attempts to solve the correspondence problem by
neural networks. Among these are O'Toole (1989), Qian and Sejnowski (1988),
Sun et al. (1987), and Zhou and Chellappa (1988). These studies use different
approaches and topologies from the one used in this paper.
2 DESCRIPTION OF THE APPROACH
The proposed approach poses the correspondence problem as a mapping problem
and uses a special kind of NN to learn this mapping. The only constraint that is
explicitly imposed is the "epipolar" constraint. It states that the match of a point
in row m of one of the two images can only be located in row m of the other
image. This helps to reduce the computation by restricting the search area.
2.1 CORRESPONDENCE PROBLEM AS A MAPPING PROBLEM
The initial phase of the procedure involves casting the correspondence problem as
a many to one mapping problem. To explain the method, let us consider a very
simple problem involving one row (epipolar line) of a stereo pair. Assume 6 pixel
wide rows and take the specific example of [001110] and [111010] as left and right
image rows respectively. The task is to find the best possible match between these
two strings which in this case is [1110].
The process starts by forming an "initial match matrix". This matrix includes all
possible matches between the pixels of the two rows. Fig. 1 illustrates this matrix
for the considered example. Each 1 indicates a potential match. However only a
few of these matches are correct. Thus, the main task is to distinguish the correct
matches which are starred from the false ones.
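A small sketch of this construction, assuming potential matches are declared wherever the two pixel values are equal (the helper name is ours):

```python
# Sketch of building the initial match matrix for one epipolar line pair;
# we assume a potential match wherever the two pixel values are equal,
# and the helper name is ours.
def initial_match_matrix(left, right):
    # entry (i, j) is 1 when right pixel i could match left pixel j
    return [[1 if right[i] == left[j] else 0 for j in range(len(left))]
            for i in range(len(right))]

left = [0, 0, 1, 1, 1, 0]    # the example rows from the text
right = [1, 1, 1, 0, 1, 0]
for row in initial_match_matrix(left, right):
    print(row)
```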
To distinguish the correct matches from the false ones, Marr and Poggio (1976)
imposed two constraints on the correspondences; (1) uniqueness- that there should
be a one-to-one correspondence between features in the two eyes, and (2) smoothness - that surfaces should change smoothly in depth. The first constraint means
that only one element of the match matrix may have a value of 1 along each horizontal and vertical direction. The second constraint translates into a tendency
for the correct matches to spread along the 45° directions. These constraints are
implemented through weighted connections between match matrix elements. The
uniqueness constraint is modeled by inhibitory (negative) weights along the
horizontal/vertical directions. The smoothness constraint gives rise to excitatory
(positive) weights along 45? lines. The connections from the rest of elements
receive a zero (don't care) weight. Using fixed excitatory and inhibitory constants,
they progressively eliminate false correspondences by applying an iterative algorithm.
The described row wise matching does not consider the vertical dependency of pixels in 2-d images. To account for inter-row relationships, the procedure is
extended by stacking up the initial match matrices of all the rows to generate a
three-dimensional "initial match volume", as shown in Fig. 2. Application of the
two mentioned constraints extends the 2-d excitatory region described above to a
45°-oriented plane in the volume while the inhibitory region remains on the 2-d
plane of the row-wise match. Since depth changes usually happen within a locality, instead of using the complete planes, a subregion of them around each element
is selected. Fig. 3 shows an example of such a neighborhood. Note that the considered excitatory region is a circular disc portion of the 45° plane. The choice of
the radius size (three in this case) is arbitrary and can be varied. A similar iterative technique is applied to the elements of the initial match volume in order to
eliminate incompatible matches and retain the good ones.
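For contrast with the learned mapping proposed here, a minimal 2-d version of such a fixed-weight cooperative update might look as follows; the excitatory/inhibitory gains and the threshold are assumed values, and a full implementation would sum over the 3-d neighborhood of Fig. 3 instead.

```python
import numpy as np

# Minimal 2-d sketch of a Marr-Poggio style cooperative update on a match
# matrix; gains and threshold are assumed values, not the paper's.
def iterate(M0, excit=1.0, inhib=0.25, n_iter=5):
    M = M0.astype(float)
    n, m = M.shape
    for _ in range(n_iter):
        new = np.zeros_like(M)
        for i in range(n):
            for j in range(m):
                # excitation from the 45-degree diagonal (smoothness)
                diag = sum(M[i + k, j + k] for k in (-1, 1)
                           if 0 <= i + k < n and 0 <= j + k < m)
                # inhibition from the same row and column (uniqueness)
                line = M[i, :].sum() + M[:, j].sum() - 2.0 * M[i, j]
                new[i, j] = M0[i, j] + excit * diag - inhib * line
        M = (new > 0.5).astype(float)   # threshold the support
    return M

left = np.array([0, 0, 1, 1, 1, 0])
right = np.array([1, 1, 1, 0, 1, 0])
M0 = (right[:, None] == left[None, :]).astype(float)  # initial match matrix
print(iterate(M0))
```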
There are several serious difficulties with the Marr-Poggio algorithm. First, there
is no systematic method for selection of the best values of the
excitatory/inhibitory weights. These parameters are usually selected by trial and
error. Moreover, a set of weights that works well for one case does not necessarily
yield good results for a different pair of images. In addition, utilization of constant weights has no analogy in biological vision systems. Another drawback
regards the imposition of the two previously mentioned constraints which are
based on assumptions about the form of the underlying scene. However, psychophysical evidence suggests that the stereopsis constraints are more complex and
more flexible than can be characterized by simple fixed rules.
The view that we take is that the described process can be posed as a mapping
operation from the space of "initial match volume" to the space of "true match
volume". Such a transformation can be considered as a one-shot (non-iterative)
mapping from the initial matches to the final ones. This is a complex non-linear
relationship which is very difficult to model by conventional methods. However, a
neural net can learn and, more importantly, generalize it.
2.2 NEURAL NETWORK ARCHITECTURE
The described mapping is a function of the elements in the initial match volume.
This can be expressed as:
$$t(x_1, x_2, x_3) = f\big(\, i(a, b, c) \mid (a, b, c) \in S \,\big)$$

where
$t(x_1, x_2, x_3)$ = state of the node located at coordinate $(x_1, x_2, x_3)$ in the true match volume,
$f$ = the nonlinear mapping function,
$i(a, b, c)$ = state of the node located at coordinate $(a, b, c)$ in the initial match volume,
$S$ = a set of three-dimensional coordinates including $(x_1, x_2, x_3)$ and those of its neighbors in a specified neighborhood.
In such a formulation, if f is known, the task is complete. A NN is capable of
learning f through examining a set of examples involving initial matches and their
corresponding true matches. The learned function will be coded in a distributive
manner as the learned weights of the net.
Note that this approach does not impose any constraints on the solution. No a
priori excitatory/inhibitory assignments are made. Only a unified concept of a
neighboring region, S, which influences the disparity computation is adopted. The
influence of the elements in S on the solution is learned by the NN. This means
that all the appropriate constraints are automatically learned.
Unlike the Marr-Poggio approach, the NN formulation allows us to consider any
shape or size for the neighborhood, S. Although in discussions of next sections we
use a Marr-Poggio type neighborhood as shown in Fig. 3, there is no restriction on
this. In this work we used this S in order to be able to compare our results with
those of Marr-Poggio. In a previous study (Khotanzad & Lee (1990)) we used a
standard fully connected multi-layer feed-forward NN to learn f. The main problem with that net is the ad hoc selection of the number of hidden nodes. In this
study, we use another layered feed-forward neural net termed "sparsely connected
NN with augmented inputs" which does not suffer from this problem. It consists of
an input layer, an output layer, and one hidden layer. The hidden layer nodes
and the output node have a Sigmoid non-linearity transfer function. The inputs
to this net consist of the state of the considered element in the initial match
volume along with states of those in its locality as will be described. The response
of the output node is the computed state of the considered element of the initial
match volume in the true match volume. The number of hidden nodes are
decided based on the shape and size of the selected neighborhood, S , as described
in the example to follow. This net is not a fully connected net and each hidden
node gets connected to a subset of inputs. Thus the term "sparsely connected" is
used.
To illustrate the suggested net, let us use the S of Fig. 3. In this case, each element in the initial match volume gets affected by 24 other elements shown by circles and crosses in the figure. Our suggested network for such an S is shown in
Fig. 4. It has 625 inputs, 25 hidden nodes and one output node. Each hidden
node is only connected to one set of 25 input nodes. The 625 inputs consist of 25
sets of 25 elements of the initial match volume. Let us denote these sets by
$I_1, I_2, \ldots, I_{25}$ respectively. The first set of 25 inputs consists of the state of the
element of the initial match volume whose final state is sought along with those of
its 24 neighbors. Let us denote this node and its neighbors by $t$ and $S^t = \{s_1^t, s_2^t, \ldots, s_{24}^t\}$
respectively. Then $I_1 = \{t, S^t\}$. The second set is composed of
the same type of information for neighbor $s_1^t$. In other words $I_2 = \{s_1^t, S^{s_1^t}\}$.
$I_3, \ldots, I_{25}$ are made similarly. So in general

$$I_j = \{\, s_{j-1}^t,\; S^{s_{j-1}^t} \,\}, \qquad j = 2, 3, \ldots, 25.$$
Note that there is a good degree of overlap among these 625 inputs. However,
these redundant inputs are processed separately in the hidden layer as explained
later. Due to the structure of this input, it is referred to as "augmented input".
The hidden layer consists of 25 nodes, each of which is connected to only one of
the 25 sets of inputs through weights to be learned. Thus, each node of the hidden layer processes the result of evolution of one of the 25 input sets. The effects
of processing these 25 evolved sets would then be integrated at the single output
node through the connection weights between the hidden nodes and the output
node. The output node then computes the corresponding final state of the considered initial match element.
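In code, the forward pass of this architecture could be sketched as below. The sigmoid form, the array shapes, and the random initialization are our assumptions; only the sparse wiring (one hidden node per 25-element input set) follows the description above.

```python
import numpy as np

# Sketch of the forward pass of the sparsely connected net of Fig. 4:
# 25 input sets of 25 values, one hidden node per set, one output node.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(sets, W_in, b_h, w_out, b_o):
    # sets: (25, 25) array, row j holds input set I_{j+1}
    hidden = sigmoid(np.einsum("ji,ji->j", W_in, sets) + b_h)  # one node per set
    return sigmoid(w_out @ hidden + b_o)                       # final match state

rng = np.random.default_rng(0)
sets = rng.integers(0, 2, size=(25, 25)).astype(float)  # toy augmented input
W_in = rng.normal(scale=0.1, size=(25, 25))              # sparse input weights
b_h = np.zeros(25)
w_out = rng.normal(scale=0.1, size=25)
out = forward(sets, W_in, b_h, w_out, 0.0)
print("predicted true-match state: %.3f" % out)
```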
Training this net is equivalent to finding proper weights for all of its connections
as well as thresholds associated with the nodes. This is carried out by the backpropagation learning algorithm (Rumelhart et al. (1986)). Again note that all the
weights used in this scheme are unknown and need to be computed through the
learning procedure with the training set. Thus, the concept of a priori excitatory
and inhibitory labeling is not used.
3 EXPERIMENTAL STUDY
The performance of the proposed neural network approach is tested on several
random-dot stereograms. A random dot stereogram consists of a pair of similar
structural images filled with randomly generated black and white dots, with some
regions of one of the images shifted to either left or right relative to the other
image. When viewed through a stereoscope, a human can perceive the shifted
structures as either floating upward or downward according to their relative
disparities. Stereograms with 50% density (i.e. half black, half white) are used.
Six 32x32 stereograms with varying disparities are used to teach the network.
The actual disparity maps (floating surfaces) of these are shown in Fig. 5. Each
stereogram contains three different depth levels (disparity regions) represented by
different gray levels. Therefore, six three-dimensional initial match volumes and
their six corresponding true match volumes comprise the training set for the NN.
Each initial match volume and its corresponding true match volume contain 32³
input-output pairs. Since six stereograms are considered, a total of 6×32³ input-output pairs are available for training.
The performance of the trained net is tested on several random-dot stereograms.
Fig. 5 shows the results for the same data the net is trained with. In addition the
performance was tested on other stereograms that are different from the training
set. The considered differences include: the shape of the disparity regions, size of
the image, disparity levels, and addition of noise to one image of the pair. These
cases are not presented here due to space limitation. We can report that all of
them yielded very good results.
In Fig. 5, the results obtained using the Marr-Poggio algorithm are also shown for
comparison. Even though we tried to find the best feedback parameters for Marr-Poggio
through trial and error, the NN outperformed it in all cases in terms of number of
error pixels in the resulting disparity map.
4 CONCLUSION
In this paper, a neural network approach to the problem of stereopsis was discussed. A multilayer feed-forward net was developed to learn the mapping that
retains the correct matches between the pixels of the epipolar lines of the stereo
pair from amongst all the possible matches. The only constraint that is explicitly
imposed is the "epipolar" constraint. All the other appropriate constraints are
learned by example and coded in the nets in a distributed fashion. The net learns
by examples of stereo pairs and their corresponding depth maps using the backpropagation learning rule. Performance was tested on several random-dot stereograms and it was shown that the learning is generalized to cases outside the training set. The net performance was also found to be superior to the Marr-Poggio algorithm.
Acknowledgements
This work was supported in part by DARPA under Grant MDA-903-86-C-0182.
References
Dhond, U. R. & Aggarwal, J. K. (1989), "Structure from stereo - A review," IEEE
Trans. SMC, vol. 19, pp. 1489-1510.
Drumheller, M. & Poggio, T. (1986), "On parallel stereo," Proc. IEEE Intl. Conf.
on Robotics and Automation, vol. 3, pp. 1439-1448.
Khotanzad, A. & Lee, Y. W. (1990), "Depth Perception by a Neural Network,"
IEEE Midcon/90 Conf. Record, Dallas, Texas, pp. 424-427, Sept. 11-13.
Marr, D. & Poggio, T. (1976), "Cooperative computation of stereo disparity," Science, 194, pp. 283-287.
O'Toole, A. J. (1989), "Structure from stereo by associative learning of the constraints," Perception, 18, pp. 767-782.
Poggio, T. (1984), "Vision by man and machine," Scientific American, vol. 250,
pp. 106-116, April.
Qian, N. & Sejnowski, T. J. (1988), "Learning to solve random-dot stereograms
of dense and transparent surfaces with recurrent backpropagation," in Touretzky
& Sejnowski (Eds.), Proceedings of the 1988 Connectionist Models Summer School, pp. 435-444,
Morgan Kaufmann Publishers.
Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986), "Learning internal
representations by error propagation," in D.E. Rumelhart & J.L. McClelland
(Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1: Foundations, MIT Press.
Sun, G. Z., Chen, H. H., & Lee, Y. C. (1987), "Learning stereopsis with neural networks," Proc. IEEE First Intl. Conf. on Neural Networks, San Diego, CA, pp.
345-355, June.
Zhou, Y. T. & Chellappa, R. (1988), "Stereo matching using a neural network,"
Proc. IEEE International Conf. on Acoustics, Speech, and Signal Processing,
ICASSP-88, New York, pp. 940-943, April 11-14.
[Figure 1: 6×6 grid of 0/1 entries over Left (columns) and Right (rows); correct matches starred.]
Figure 1: The initial match matrix for the considered example. 1 represents a match. Correct matches are starred.
[Figure 2: row match matrices, row 1(left), row 2(left), ..., stacked into a volume.]
Figure 2: Schematic of the initial match volume constructed by stacking up row match matrices.
[Figure 4: network diagram with input layer nodes 1, 2, ..., 25, 26, 27, ..., 50, ..., 625.]
Figure 3: The neighborhood structure considered in the initial match volume. If used with Marr-Poggio, circles and crosses represent excitatory and inhibitory neighbors respectively.
Figure 4: The sparsely connected NN with augmented inputs when the neighborhood of Fig. 3 is used.
[Figure 5: columns Actual, Marr-Poggio, Neural Net; gray levels encode corresponding disparity in pixels: -4, -2, 0, +2, +4 (one level unlabelled).]
Figure 5: The results of disparity computation for six random-dot stereograms
which are used to train the NN. The Marr-Poggio results are also
shown.
2,592 | 3,350 | Random Sampling of States in Dynamic Programming
Christopher G. Atkeson and Benjamin Stephens
Robotics Institute, Carnegie Mellon University
cga@cmu.edu, bstephens@cmu.edu
www.cs.cmu.edu/~cga, www.cs.cmu.edu/~bstephe1
Abstract
We combine three threads of research on approximate dynamic programming:
sparse random sampling of states, value function and policy approximation using
local models, and using local trajectory optimizers to globally optimize a policy
and associated value function. Our focus is on finding steady state policies for
deterministic time invariant discrete time control problems with continuous states
and actions often found in robotics. In this paper we show that we can now solve
problems we couldn't solve previously.
1 Introduction
Optimal control provides a potentially useful methodology to design nonlinear control laws (policies) u = u(x) which give the appropriate action u for any state x. Dynamic programming provides
a way to find globally optimal control laws, given a one step cost (a.k.a. "reward" or "loss") function
and the dynamics of the problem to be optimized. We focus on control problems with continuous
states and actions, deterministic time invariant discrete time dynamics x_{k+1} = f(x_k, u_k), and a time
invariant one step cost function L(x, u). Policies for such time invariant problems will also be time
invariant. We assume we know the dynamics and one step cost function. Future work will address
simultaneously learning a dynamic model, finding a robust policy, and performing state estimation
with an erroneous partially learned model. One approach to dynamic programming is to approximate
the value function V(x) (the optimal total future cost from each state, V(x) = Σ_{k=0}^∞ L(x_k, u_k)), and to
repeatedly solve the Bellman equation V(x) = min_u (L(x, u) + V(f(x, u))) at sampled states x until
the value function estimates have converged to globally optimal values. We explore approximating
the value function and policy using many local models.
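As a concrete picture of the tabular baseline that this work generalizes, the sketch below performs the Bellman backup over discretized states and actions. It is an illustration only: it assumes the dynamics map grid states to grid states, whereas a real implementation would interpolate successor values.

    def value_iteration(states, actions, f, L, n_sweeps=100):
        """Tabular value iteration for V(x) = min_u [L(x, u) + V(f(x, u))].

        states and actions are finite discretizations (states must be hashable),
        f is the deterministic dynamics and L the one step cost. Assumes f(x, u)
        lands exactly on a grid state; practical codes interpolate instead.
        """
        V = {x: 0.0 for x in states}
        for _ in range(n_sweeps):
            for x in states:
                V[x] = min(L(x, u) + V[f(x, u)] for u in actions)
        return V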
An example problem: We use one link pendulum swingup as an example problem in this introduction to provide the reader with a visualizable example of a value function and policy. In one
link pendulum swingup a motor at the base of the pendulum swings a rigid arm from the downward
stable equilibrium to the upright unstable equilibrium and balances the arm there (Figure 1). What
makes this challenging is that the one step cost function penalizes the amount of torque used and
the deviation of the current position from the goal. The controller must try to minimize the total
cost of the trajectory. The one step cost function for this example is a weighted sum of the squared
position errors (θ: difference between current angles and the goal angles) and the squared torques
(τ): L(x, u) = 0.1 θ² T + τ² T, where 0.1 weights the position error relative to the torque penalty, and
T is the time step of the simulation (0.01s). There are no costs associated with the joint velocity.
Figure 2 shows the value function and policy generated by dynamic programming.
One important thread of research on approximate dynamic programming is developing representations that adapt to the problem being solved and extend the range of problems that can be solved
with a reasonable amount of memory and time. Random sampling of states has been proposed by a
number of researchers [1, 2, 3, 4, 5, 6, 7]. In our case we add new randomly selected states as we
Figure 1: Configurations from the simulated one link pendulum optimal trajectory every half a second and at
the end of the trajectory.
solve the problem, allowing the "grid" that results to reflect the local complexity of the value function as we generate it. Figure 2:right shows such a randomly generated set of states superimposed
on a contour plot of the value function for one link swingup.
Another important thread in our work on applied dynamic programming is developing ways for grids
or random samples to be as sparse as possible. One technique that we apply here is to represent full
trajectories from each sampled state to the goal, and to refine each trajectory using local trajectory
optimization [8]. Figure 2:right shows a set of optimized trajectories from the sampled states to the
goal. One key aspect of the local trajectory optimizer we use is that it provides a local quadratic
model of the value function and a local linear model of the policy at the sampled state. These local
models help our function approximators handle sparsely sampled states. To obtain globally optimal
solutions, we incorporate exchange of information between non-neighboring sampled states.
On what problems will the proposed approach work? We believe our approach can discover
underlying simplicity in many typical problems. An example of a problem that appears complex but
is actually simple is a problem with linear dynamics and a quadratic one step cost function. Dynamic programming can be done for linear quadratic regulator (LQR) problems even with hundreds
of dimensions and it is not necessary to build a grid of states [9]. The cost of representing the value
function is quadratic in the dimensionality of the state. The cost of performing a "sweep" or update
of the value function is at most cubic in the state dimensionality. Continuous states and actions
are easy to handle. Perhaps many problems, such as the examples in this paper, have simplifying
characteristics similar to LQR problems. For example, problems that are only "slightly" nonlinear
and have a locally quadratic cost function may be solvable with quite sparse representations. One
goal of our work is to develop methods that do not immediately build a hugely expensive representation if it is not necessary, and attempt to harness simple and inexpensive parallel local planning
to solve complex planning problems. Another goal of our work is to develop methods that can take
advantage of situations where only a small amount of global interaction is necessary to enable local
planners capable of solving local problems to find globally optimal solutions.
2 Related Work
Random state selection: Random grids and random sampling are well known in numerical integration, finite element methods, and partial differential equations. Rust applied random sampling
of states to dynamic programming [1, 10]. He showed that random sampling of states can avoid
the curse of dimensionality for stochastic dynamic programming problems with a finite set of discrete actions. This theoretical result focused on the cost of computing the expectation term in the
stochastic version of the Bellman equation. [11] claim the assumptions used in [1] are unrealistically
restrictive, and [12] point out that the complexity of Rust?s approach is proportional to the Lipschitz
constant of the problem data, which often increases exponentially with increasing dimensions. The
practicality and usefulness of random sampling of states in deterministic dynamic programming with
continuous actions (the focus of our paper) remains an open question. We note that deterministic
problems are usually more difficult to solve since the random element in the stochastic dynamics
smooths the dynamics and makes them easier to sample. Alternatives to random sampling of states
are irregular or adaptive grids [13], but in our experience they still require too many representational
resources as the problem dimensionality increases.
In reinforcement learning random sampling of states is sometimes used to provide training data for
function approximation of the value function. Reinforcement learning also uses random exploration
for several purposes. In model-free approaches exploration is used to find actions and states that lead
to better outcomes. This process is somewhat analogous to the random state sampling described in
this paper for model-based approaches. In model-based approaches, exploration is also used to
improve the model of the task. In our paper it is assumed a model of the task is available, so this
type of exploration is not necessary.
[Figure 2 graphics: panels titled "Value function for one link example", "random initial states and trajectories for one link example", and "Policy for one link example"; axes are position (r) and velocity (r/s), with value and torque (Nm) on the remaining scales.]
Figure 2: Left and Middle: The value function and policy for a one link pendulum swingup. The optimal
trajectory is shown as a yellow line in the value function plot, and as a black line with a yellow border in the
policy plot. The value function is cut off above 20 so we can see the details of the part of the value function that
determines the optimal trajectory. The goal is at the state (0,0). Right: Random states (dots) and trajectories
(black lines) used to plan one link swingup, superimposed on a contour map of the value function.
In the field of Partially Observable Markov Decision Processes (POMDPs) there has been some
work on randomly sampling belief states, and also using local models of the value function and its
first derivative at each randomly sampled belief state (for example [2, 3, 4, 5, 6, 7]). Thrun explored
random sampling of belief states where the underlying states and actions were continuous [7]. He
used a nearest neighbor scheme to perform value function interpolation, and a coverage test to decide
whether to accept a new random state (is a new random state far enough from existing states?) rather
than a surprise test (is the value of the new random state predicted incorrectly?).
In robot planning for obstacle avoidance random sampling of states is now quite popular [14]. Probabilistic Road Map (PRM) methods build a graph of plans between randomly selected states. Rapidly
Exploring Random Trees (RRTs) grow paths or trajectories towards randomly selected states. In
general it is difficult to modify PRM and RRT approaches to find optimal paths, and the resulting
algorithms based on RRTs are very similar to A* search.
3
Combining Random State Sampling With Local Optimization
The process of using the Bellman equation to update a representation of the value function by minimizing over all actions at a state is referred to as value iteration. Standard value iteration represents
the value function and associated policy using multidimensional tables, with each entry in the table
corresponding to a particular state. In our approach we randomly select states, and associate with
each state a local quadratic model of the value function and a local linear model of the policy. Our
approach generalizes value iteration, and has the following components: 1. There is a "global"
function approximator for both the value function and the policy. In our current implementation the
value function and policy are represented through a combination of sampled and parametric representations, building global approximations by combining local models. 2. It is possible to estimate
the value of a state in two ways. The first is to use the approximated value function. The second is
our analog of using the Bellman equation: use the cost of a trajectory starting from the state under
consideration and following the current global policy. The trajectory is optimized using local trajectory optimization. 3. As in a Bellman update, there is a way to globally optimize the value of
a state by considering many possible "actions". In our approach we consider many local policies
associated with different stored states.
Taking advantage of goal states: For problems with goal states there are several ways to speed
up convergence. In cases where LQR techniques apply [9], we use the policy obtained by solving
the corresponding LQR control problem at the goal as the default policy everywhere, to which the
policy computed by dynamic programming is added. [15] plots an example of a default policy and
the policy generated by dynamic programming for comparison. We limit the outputs of this default
policy. In setting up the goal LQR controller, a radius is established and tested within which the
goal LQR controller always works and achieves close to the predicted optimal cost. This has the
effect of making of enlarging the goal. If the dynamic programming process can get within the LQR
radius of the goal, it can use only the default policy to go the rest of the way. If it is not possible to
create a goal LQR controller due to a hard nonlinearity, or if there is no goal state, it does not have
to be done as the goal controller merely accelerates the solution process. The proposed technique
can be generalized in a straightforward way to use any default goal policy. In this paper the swingup
problems use an LQR default policy, which was limited for each action dimension to ±5 Nm. For
the balance problem we did not use a default policy. We note that for the swingup problems shown
here the default LQR policy is capable of balancing the inverted pendulum at the goal, but is not
capable of swinging up the pendulum to the goal.
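A minimal sketch of how such a default goal policy could be constructed, assuming dynamics linearized about the goal as x_{k+1} = A x_k + B u_k; the fixed-iteration Riccati recursion and the helper names are illustrative choices, not details taken from the paper.

    import numpy as np

    def lqr_gain(A, B, Q, R, iters=500):
        """Infinite-horizon discrete-time LQR gain via Riccati iteration."""
        P = Q.copy()
        for _ in range(iters):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
        return K

    def default_policy(x, K, limit=5.0):
        """Goal LQR action, limited per action dimension (here to +/- 5 Nm)."""
        return np.clip(-K @ x, -limit, limit)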
We also initially only generate the value function and policy in the region near the goal. This solved
region is gradually increased in size by increasing a value function threshold. Examples of regions
bounded by a constant value are shown by the value function contours in Figure 2. [16] describes
how to handle periodic tasks which have no goal states, and also discontinuities in the dynamics.
Local models of the value function and policy: We need to represent value functions as sparsely
as possible. We propose a hybrid tabular and parametric approach: parametric local models of the
value function and policy are represented at sampled locations. This representation is similar to
using many Taylor series approximations of a function at different points. At each sampled state x^p
the local quadratic model for the value function is:

V^p(x) ≈ V_0^p + V_x^p x̃ + (1/2) x̃^T V_xx^p x̃    (1)

where x̃ = x − x^p is the vector from the stored state x^p, V_0^p is the constant term of the local model,
V_x^p is the first derivative of the local model (and the value function) at x^p, and V_xx^p is the second
derivative of the local model (and the value function) at x^p. The local linear model for the policy is:

u^p(x) = u_0^p − K^p x̃    (2)

where u_0^p is the constant term of the local policy, and K^p is the first derivative of the local policy and
also the gain matrix for a local linear controller.
Creating the local model: These local models of the value function can be created using Differential Dynamic Programming (DDP) [17, 18, 8, 16]. This local trajectory optimization process
is similar to linear quadratic regulator design in that a local model of the value function is produced. In DDP, value function and policy models are produced at each point along a trajectory.
Suppose at a point (x_i, u_i) we have 1) a local second order Taylor series approximation of the optimal value function: V^i(x) ≈ V_0^i + V_x^i x̃ + (1/2) x̃^T V_xx^i x̃ where x̃ = x − x_i; 2) a local second order Taylor
series approximation of the robot dynamics, which can be learned using local models of the dynamics (f_x^i and f_u^i correspond to A and B of the linear plant model used in linear quadratic regulator
(LQR) design): x_{k+1} = f^i(x, u) ≈ f_0^i + f_x^i x̃ + f_u^i ũ + (1/2) x̃^T f_xx^i x̃ + x̃^T f_xu^i ũ + (1/2) ũ^T f_uu^i ũ where ũ = u − u_i; and
3) a local second order Taylor series approximation of the one step cost, which is often known
analytically for human specified criteria (L_xx and L_uu correspond to Q and R of LQR design):
L^i(x, u) ≈ L_0^i + L_x^i x̃ + L_u^i ũ + (1/2) x̃^T L_xx^i x̃ + x̃^T L_xu^i ũ + (1/2) ũ^T L_uu^i ũ.

Given a trajectory, one can integrate the value function and its first and second spatial derivatives
backwards in time to compute an improved value function and policy. We utilize the "Q function"
notation from reinforcement learning: Q(x, u) = L(x, u) + V(f(x, u)). The backward sweep takes
the following form (in discrete time):

Q_x^i = L_x^i + V_x^i f_x^i;    Q_u^i = L_u^i + V_x^i f_u^i    (3)

Q_xx^i = L_xx^i + V_x^i f_xx^i + (f_x^i)^T V_xx^i f_x^i;    Q_ux^i = L_ux^i + V_x^i f_ux^i + (f_u^i)^T V_xx^i f_x^i;    Q_uu^i = L_uu^i + V_x^i f_uu^i + (f_u^i)^T V_xx^i f_u^i    (4)

Δu^i = (Q_uu^i)^{-1} Q_u^i;    K^i = (Q_uu^i)^{-1} Q_ux^i    (5)

V_x^{i-1} = Q_x^i − Q_u^i K^i;    V_xx^{i-1} = Q_xx^i − Q_xu^i K^i    (6)

where subscripts indicate derivatives and superscripts indicate the trajectory index. After the backward sweep, forward integration can be used to update the trajectory itself: u_new^i = u^i − Δu^i − K^i (x_new^i − x^i). We note that the cost of this approach grows at most cubically rather than exponentially with respect to the dimensionality of the state.
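The backward sweep of equations (3)-(6) maps directly onto code; the sketch below assumes the Taylor coefficients of the dynamics and cost are supplied per time step (the second derivatives of the dynamics enter as 3-index arrays contracted against V_x), and the data layout is our own illustration rather than the original implementation.

    import numpy as np

    def ddp_backward_sweep(steps, V_x, V_xx):
        """DDP backward pass. steps[i] holds f_x, f_u, f_xx, f_xu, f_uu and
        L_x, L_u, L_xx, L_xu, L_uu at trajectory point i; V_x, V_xx are the
        value function derivatives at the final state. Returns the feedforward
        corrections du and feedback gains K for the forward pass."""
        du_all, K_all = [], []
        for s in reversed(steps):
            fx, fu = s['f_x'], s['f_u']
            Q_x  = s['L_x'] + V_x @ fx                                        # eq. (3)
            Q_u  = s['L_u'] + V_x @ fu
            Q_xx = s['L_xx'] + np.tensordot(V_x, s['f_xx'], 1) + fx.T @ V_xx @ fx
            Q_ux = s['L_xu'].T + np.tensordot(V_x, s['f_xu'], 1).T + fu.T @ V_xx @ fx
            Q_uu = s['L_uu'] + np.tensordot(V_x, s['f_uu'], 1) + fu.T @ V_xx @ fu  # eq. (4)
            du = np.linalg.solve(Q_uu, Q_u)                                   # eq. (5)
            K  = np.linalg.solve(Q_uu, Q_ux)
            V_x, V_xx = Q_x - Q_u @ K, Q_xx - Q_ux.T @ K                      # eq. (6)
            du_all.append(du); K_all.append(K)
        return du_all[::-1], K_all[::-1]

The forward pass then applies u_new = u − Δu − K (x_new − x) at each step.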
In problems that have a goal state, we can generate a trajectory from each stored state all the way to
the goal. The cost of this trajectory is an upper bound on the true value of the state, and is used to
bound the estimated value for the old state.
Utilizing the local models: For the purpose of explaining our algorithm, let?s assume we already
have a set of sampled states, each of which has a local model of the value function and the policy.
How should we use these multiple local models? The simplest approach is to just use the predictions
of the nearest sampled state, which is what we currently do. We use a kd-tree to efficiently find
nearest neighbors, but there are many other approaches that will find nearby stored states efficiently.
In the future we will investigate using other methods to combine local model predictions from nearby
stored states: distance weighted averaging (kernel regression), linear locally weighted regression,
and quadratic locally weighted regression for value functions.
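In code, the nearest-neighbor rule might look like the following sketch (the kd-tree mirrors the one used here; the container class and field names are illustrative):

    import numpy as np
    from scipy.spatial import cKDTree

    class LocalModelStore:
        """Stored states with local value (V0, Vx, Vxx) and policy (u0, K) models."""
        def __init__(self, states, models):
            self.states = np.asarray(states)
            self.models = models                 # one dict of coefficients per state
            self.tree = cKDTree(self.states)

        def query(self, x):
            """Evaluate the local models of eqs. (1) and (2) at the nearest stored state."""
            _, p = self.tree.query(x)
            m = self.models[p]
            dx = x - self.states[p]
            V = m['V0'] + m['Vx'] @ dx + 0.5 * dx @ m['Vxx'] @ dx
            u = m['u0'] - m['K'] @ dx
            return V, u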
Creating new random states: For tasks with a goal state, we initialize the set of stored states by
storing the goal state itself. We have explored a number of distributions to select additional states
from: uniform within bounds on the states; Gaussian with the mean at the goal; sampling near
existing states; and sampling from an underlying low resolution regular grid. The uniform approach
is a useful default approach, which we use in the swingup examples, the Gaussian approach provides
a simple way to tune the distribution, sampling near existing states provides a way to efficiently
sample while growing the solved region in high dimensions, and sampling from an underlying low
resolution grid seems to perform well when only a small number of stored states are used (similar to
using low dispersion sequences [1, 14]). A key point of our approach is that we do not generate the
random states in advance but instead select them as the algorithm progresses. This allows us to apply
an acceptance criteria to candidate states, which we describe in the next paragraph. We have also
explored changing the distribution we generate candidate states from as the algorithm progresses,
for example using a mixture of Gaussians with the Gaussians centered on existing stored states.
Another reasonable hybrid approach would be to initially sample from a grid, and then bias more
general sampling to regions of higher value function approximation error.
Acceptance criteria for candidate states: We have several criteria to accept or reject states to be
permanently stored. In the future we will explore "forgetting" or removing stored states, but at this
point we apply all memory control techniques at the storage event. To focus the search and limit the
volume considered, a steadily increasing value limit is maintained (Vlimit ), which is increased slightly
after each use. The approximated value function is used to predict the value of the candidate state.
If the prediction is above Vlimit , the candidate state is rejected. Otherwise, a trajectory is created
from the candidate state using the current approximated policy, and then locally optimized. If the
value of that trajectory is above Vlimit , the candidate state is rejected. If the value of the trajectory is
within 10% of the predicted value, the candidate state is rejected. Only ?surprises? are stored. For
problems with a goal state, if the trajectory does not reach the goal the candidate state is rejected.
Other criteria such as an A* like criteria (cost-to-go(x) + cost-from-start(x) > threshold) can be
used to reject candidate states. All of the thresholds mentioned can be changed as the algorithm
progresses. For example, Vlimit is gradually increased during the solution process, to increase the
volume considered by the algorithm. We currently use a 10% "surprise" threshold. In future work
we will explore starting with a larger threshold and decreasing this threshold with time, to further
reduce the number of samples accepted and stored while improving convergence. It is possible
to take the distance to the nearest sampled state into account in the acceptance criteria for new
samples. The common approach of accepting states beyond a distance threshold enforces a minimum
resolution, and leads to potentially severe curse of dimensionality effects. Rejecting states that are
too close to existing states will increase the error in representing the value function, but may be a
way for preventing too many samples near complex regions of the value functions that have little
practical effect. For example, we often do not need much accuracy in representing the value function
near policy discontinuities where the value function has discontinuities in its spatial derivative and
?creases?. In these areas the trajectories typically move away from the discontinuities, and the
details of the value function have little effect.
In the current implementation, after a candidate state is accepted, the state in the database whose
local model was used to make the prediction is re-optimized including information from the newly
added point, since the prediction was wrong and the new point's policy may lead to a better value
for that state.
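Collecting these tests, the storage decision can be sketched as follows; rollout_and_optimize stands in for following the approximated policy from the candidate and refining the trajectory, and all names are placeholders rather than the original code.

    def consider_candidate(x, store, rollout_and_optimize, V_limit, surprise=0.10):
        """Return True if candidate state x should be stored permanently."""
        V_pred, _ = store.query(x)
        if V_pred > V_limit:                   # predicted value over the limit
            return False
        V_traj, reached_goal = rollout_and_optimize(x)
        if V_traj > V_limit or not reached_goal:
            return False
        if abs(V_traj - V_pred) <= surprise * abs(V_pred):
            return False                       # no surprise: reject
        store.add(x, V_traj)                   # then re-optimize the neighbor whose
        return True                            # local model made the wrong prediction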
Creating a trajectory from a state: We create a trajectory from a candidate state or refine a trajectory from a stored state in the same way. The first step is to use the current approximated policy until
the goal or a time limit is reached. In the current implementation this involves finding the stored
state nearest to the current state in the trajectory and using its locally linear policy to compute the
action on each time step. The second step is to locally optimize the trajectory. We use Differential
Dynamic Programming (DDP) in the current implementation [17, 18, 8, 16]. In the current implementation we do not save the trajectory but only the local models from its start. If the cost of the
0.2
Figure 3: Configurations from the simulated two link pendulum optimal swing up trajectory every fifth of a
second and the end of the trajectory.
trajectory is more than the currently stored value for the state, we reject the new value, as the values
all come from actual trajectories and are upper bounds for the true value. We always keep the lowest
upper bound.
Combining parallel greedy local optimizers to perform global optimization: As currently described, the algorithm finds a locally optimal policy, but not necessarily a globally optimal policy.
For example, the stored states could be divided into two sets of nearest neighbors. One set could
have a suboptimal policy, but have no interaction with the other set of states that had a globally
optimal policy since no nearest neighbor relations joined the two sets. We expect the locally optimal
policies to be fairly good because we 1) gradually increase the solved volume and 2) use local optimizers. Given local optimization of actions, gradually increasing the solved volume will result in
a globally optimal policy if the boundary of this volume never touches a non adjacent section of itself. Figure 2 shows the creases in the value function (discontinuities in the spatial derivative)
and corresponding discontinuities in the policy that typically result when the constant cost contour
touches a non adjacent section of itself as Vlimit is increased.
In theory, the approach we have described will produce a globally optimal policy if it has infinite
resolution and all the stored states form a densely connected set in terms of nearest neighbor relations [8]. By enforcing consistency of the local value function models across all nearest neighbor
pairs, we can create a globally consistent value function estimate. Consistency means that any state's
local model correctly predicts values of nearby states. If the value function estimate is consistent
everywhere, the Bellman equation is solved and we have a globally optimal policy. We can enforce consistency of nearest neighbor value functions by 1) using the policy of one state of a pair
to reoptimize the trajectory of the other state of the pair and vice versa, and 2) adding more stored
states in between nearest neighbors that continue to disagree [8]. This approach is similar to using
the method of characteristics to solve partial differential equations and finding value functions for
games.
In practice, we cannot achieve infinite resolution. To increase the likelihood of finding a globally
optimal policy with a limited resolution of stored states, we need an analog to exploration and to
global minimization with respect to actions found in the Bellman equation. We approximate this
process by periodically reoptimizing each stored state using the policies of other stored states. As
more and more states are stored, and many alternative stored states are considered in optimizing any
given stored state, a wide range of actions are considered for each state. We run a reoptimization
phase of the algorithm after every N (typically 100) states have been stored. There are several ways
to design this reoptimization phase. Each state could use the policy of a nearest neighbor, or a
randomly chosen neighbor with the distribution being distance dependent, or just choosing another
state randomly with no consideration of distance (what we currently do). [8] describes how to follow
a policy of another stored state if its trajectory is stored, or can be recomputed as needed. In this
work we explored a different approach that does not require each stored state to save its trajectory
or recompute it. To ?follow? the policy of another state, we follow the locally linear policy for that
state until the trajectory begins to go away from the state. At that point we switch to following the
globally approximated policy. Since we apply this reoptimization process periodically with different
randomly selected policies, over time we explore using a wide range of actions from each state.
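Schematically, the periodic reoptimization phase is the loop below, with rollout_with_policy_of standing in for the donor-policy rollout just described (all names are illustrative):

    import random

    def reoptimization_phase(store, rollout_with_policy_of):
        """Re-optimize every stored state using a randomly chosen donor policy,
        keeping the result only if it lowers the trajectory cost (stored values
        are upper bounds, so the lowest one is always kept)."""
        for p in range(len(store)):
            q = random.randrange(len(store))
            V_new = rollout_with_policy_of(p, donor=q)
            if V_new < store.value(p):
                store.update(p, V_new)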
Figure 4: Configurations from the simulated three link pendulum optimal trajectory every tenth of a second
and at the end of the trajectory.
4 Results
In addition to the one link swingup example presented in the introduction, we present results on
two link swingup (4 dimensional state) and three link swingup (6 dimensional state). A companion
paper using these techniques to explore how multiple balance strategies can be generated from one
optimization criterion is [19]. Further results, including some for a four link (8 dimensional state)
standing robot are presented.
One link pendulum swingup: For the one link swingup case, the random state approach found
a globally optimal trajectory (the same trajectory found by our grid based approaches [15]) after
adding only 63 random states. Figure 2:right shows the distribution of states and their trajectories
superimposed on a contour map of the value function for one link swingup.
Two link pendulum swingup: For the two link swingup case, the random state approach finds
what we believe is a globally optimal trajectory (the same trajectory found by our grid based approaches [15]) after storing an average of 12000 random states (Figure 3). In this case the state has
four dimensions (a position and velocity for each joint) and a two dimensional action (a torque at
each joint). The one step cost function was a weighted sum of the squared position errors and the
squared torques: L(x, u) = 0.1(θ_1² + θ_2²)T + (τ_1² + τ_2²)T. 0.1 weights the position errors relative to
the torque penalty, T is the time step of the simulation (0.01s), and there were no costs associated
with joint velocities. The approximately 12000 sampled states should be compared to the millions
of states used in grid-based approaches. A 60x60x60x60 grid with almost 13 million states failed
to find a trajectory as good as this one, while a 100x100x100x100 grid with 100 million states did
find the same trajectory. In 13 runs with different random number generator seeds, the mean number
of states stored at convergence is 11430. All but two of the runs converged after storing less than
13000 states, and all runs converged after storing 27000 states.
Three link pendulum swingup: For the three link swingup case, the random state approach found
a good trajectory after storing less than 22000 random states (Figure 4). We have not yet solved
this problem a sufficient number of times to be convinced this is a global optimum, and we do not
have a solution based on a regular grid available for comparison. We were not able to solve this
problem using regular grid-based approaches due to limited state resolution: 22x22x22x22x38x44
= 391,676,032 states filled our largest memory. As in the previous examples, the one step cost
function was a weighted sum of the squared position errors and the squared torques: L(x, u) =
0.1(θ_1² + θ_2² + θ_3²)T + (τ_1² + τ_2² + τ_3²)T.
5 Conclusion
We have combined random sampling of states and local trajectory optimization to create a promising approach to practical dynamic programming for robot control problems. We are able to solve
problems we couldn't solve before due to memory limitations. Future work will optimize aspects
and variants of this approach.
Acknowledgments
This material is based upon work supported in part by the DARPA Learning Locomotion Program
and the National Science Foundation under grants CNS-0224419, DGE-0333420, ECS-0325383,
and EEC-0540865.
References
[1] J. Rust. Using randomization to break the curse of dimensionality. Econometrica, 65(3):487–516, 1997.
[2] M. Hauskrecht. Incremental methods for computing bounds in partially observable Markov
decision processes. In Proceedings of the 14th National Conference on Artificial Intelligence
(AAAI-97), pages 734–739, Providence, Rhode Island, 1997. AAAI Press / MIT Press.
[3] N.L. Zhang and W. Zhang. Speeding up the convergence of value iteration in partially observable Markov decision processes. JAIR, 14:29–51, 2001.
[4] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm for
POMDPs. In International Joint Conference on Artificial Intelligence (IJCAI), 2003.
[5] T. Smith and R. Simmons. Heuristic search value iteration for POMDPs. In Uncertainty in
Artificial Intelligence, 2004.
[6] M.T.J. Spaan and N. Vlassis. A point-based POMDP algorithm for robot planning. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 2399–2404,
New Orleans, Louisiana, April 2004.
[7] S. Thrun. Monte Carlo POMDPs. In S.A. Solla, T.K. Leen, and K.-R. Müller, editors, Advances
in Neural Information Processing 12, pages 1064–1070. MIT Press, 2000.
[8] C. G. Atkeson. Using local trajectory optimizers to speed up global optimization in dynamic
programming. In Jack D. Cowan, Gerald Tesauro, and Joshua Alspector, editors, Advances in
Neural Information Processing Systems, volume 6, pages 663–670. Morgan Kaufmann Publishers, Inc., 1994.
[9] F. L. Lewis and V. L. Syrmos. Optimal Control, 2nd Edition. Wiley-Interscience, 1995.
[10] C. Szepesvári. Efficient approximate planning in continuous space Markovian decision problems. AI Communications, 13(3):163–176, 2001.
[11] J. N. Tsitsiklis and B. Van Roy. Regression methods for pricing complex American-style
options. IEEE-NN, 12:694–703, July 2001.
[12] V. D. Blondel and J. N. Tsitsiklis. A survey of computational complexity results in systems
and control, 2000.
[13] R. Munos and A. W. Moore. Variable resolution discretization in optimal control. Machine
Learning Journal, 49:291–323, 2002.
[14] S. M. LaValle. Planning Algorithms. Cambridge University Press, 2006.
[15] C. G. Atkeson. Randomly sampling actions in dynamic programming. In 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning
(ADPRL), 2007.
[16] C. G. Atkeson and J. Morimoto. Nonparametric representation of policies and value functions: A trajectory based approach. In Advances in Neural Information Processing Systems 15.
MIT Press, 2003.
[17] P. Dyer and S. R. McReynolds. The Computation and Theory of Optimal Control. Academic
Press, New York, NY, 1970.
[18] D. H. Jacobson and D. Q. Mayne. Differential Dynamic Programming. Elsevier, New York,
NY, 1970.
[19] C. G. Atkeson and B. Stephens. Multiple balance strategies from one optimization criterion.
In Humanoids, 2007.
| 3350 |@word middle:1 version:1 seems:1 nd:1 open:1 simulation:2 simplifying:1 initial:1 configuration:3 series:4 lqr:12 existing:5 current:11 discretization:1 yet:1 must:1 periodically:2 numerical:1 motor:1 plot:4 update:4 rrt:1 half:1 selected:4 greedy:1 intelligence:3 xk:4 smith:1 accepting:1 provides:5 recompute:1 location:1 zhang:2 along:1 differential:5 symposium:1 lux:1 combine:2 interscience:1 paragraph:1 blondel:1 forgetting:1 alspector:1 planning:6 growing:1 bellman:7 torque:8 globally:16 decreasing:1 little:2 curse:3 actual:1 considering:1 increasing:4 begin:1 discover:1 underlying:4 bounded:1 notation:1 lowest:1 what:5 finding:5 hauskrecht:1 every:4 multidimensional:1 wrong:1 uk:2 control:12 grant:1 before:1 local:50 modify:1 limit:4 subscript:1 path:2 interpolation:1 approximately:1 rhode:1 black:2 challenging:1 limited:3 range:3 practical:2 acknowledgment:1 enforces:1 orleans:1 practice:1 optimizers:4 area:1 reject:3 road:1 regular:3 get:1 cannot:1 close:2 selection:1 storage:1 www:2 optimize:4 deterministic:4 map:3 vxi:7 go:3 straightforward:1 hugely:1 starting:2 focused:1 swinging:1 resolution:8 simplicity:1 pomdp:1 immediately:1 survey:1 avoidance:1 utilizing:1 handle:3 analogous:1 simmons:1 suppose:1 programming:21 us:1 locomotion:1 associate:1 velocity:6 element:2 expensive:1 approximated:5 roy:1 sparsely:2 cut:1 database:1 predicts:1 solved:8 region:6 connected:1 solla:1 mentioned:1 benjamin:1 complexity:3 ui:5 reward:1 econometrica:1 dynamic:31 gerald:1 solving:2 upon:1 joint:5 darpa:1 represented:2 vxx:1 describe:1 monte:1 artificial:3 couldn:2 outcome:1 choosing:1 quite:2 whose:1 larger:1 solve:10 heuristic:1 otherwise:1 itself:4 superscript:1 advantage:2 sequence:1 propose:1 interaction:2 neighboring:1 combining:3 rapidly:1 achieve:1 representational:1 fi0:1 mayne:1 convergence:4 ijcai:1 optimum:1 produce:1 incremental:1 help:1 develop:2 nearest:12 progress:3 coverage:1 c:2 predicted:3 indicate:2 involves:1 come:1 radius:2 mcreynolds:1 stochastic:3 exploration:5 vx:1 human:1 enable:1 centered:1 material:1 exchange:1 require:2 adprl:1 fix:6 randomization:1 exploring:1 considered:4 minu:1 equilibrium:2 predict:1 prm:2 claim:1 seed:1 optimizer:1 achieves:1 purpose:2 estimation:1 currently:5 largest:1 vice:1 create:4 weighted:6 minimization:1 uller:1 mit:3 always:2 gaussian:2 rather:2 avoid:1 focus:4 superimposed:3 likelihood:1 elsevier:1 dependent:1 rigid:1 nn:1 cubically:1 typically:3 accept:2 initially:2 relation:2 plan:2 spatial:3 integration:2 initialize:1 fairly:1 field:1 never:1 sampling:21 represents:1 future:6 tabular:1 gordon:1 randomly:11 simultaneously:1 densely:1 national:2 phase:2 cns:1 attempt:1 lavalle:1 acceptance:3 investigate:1 fiu:6 severe:1 mixture:1 jacobson:1 capable:3 partial:2 necessary:4 experience:1 tree:2 filled:1 taylor:4 old:1 penalizes:1 re:1 theoretical:1 increased:4 obstacle:1 markovian:1 cost:25 deviation:1 entry:1 v0i:1 hundred:1 usefulness:1 uniform:2 too:3 stored:29 providence:1 periodic:1 eec:1 combined:1 international:3 standing:1 probabilistic:1 off:1 squared:6 reflect:1 nm:2 aaai:2 creating:3 american:1 derivative:8 style:1 li:1 account:1 automation:1 inc:1 try:1 break:1 pendulum:12 reached:1 start:2 option:1 parallel:2 minimize:1 morimoto:1 accuracy:1 kaufmann:1 characteristic:2 efficiently:3 correspond:2 yellow:2 rejecting:1 produced:2 carlo:1 trajectory:56 pomdps:4 researcher:1 converged:3 reach:1 inexpensive:1 steadily:1 associated:5 sampled:14 gain:1 newly:1 popular:1 anytime:1 dimensionality:7 actually:1 appears:1 higher:1 
jair:1 follow:3 methodology:1 harness:1 improved:1 april:1 leen:1 done:2 just:2 rejected:4 until:3 christopher:1 touch:2 nonlinear:2 pineau:1 perhaps:1 pricing:1 believe:2 grows:1 dge:1 building:1 effect:4 true:2 swing:2 analytically:1 moore:1 adjacent:2 during:1 game:1 maintained:1 steady:1 criterion:9 generalized:1 consideration:2 jack:1 fi:1 ari:1 common:1 rust:3 exponentially:2 volume:6 million:3 extend:1 he:2 analog:2 mellon:1 versa:1 cambridge:1 ai:1 grid:15 l0i:1 consistency:3 nonlinearity:1 had:1 dot:1 stable:1 robot:5 v0:2 base:1 add:1 showed:1 optimizing:1 tesauro:1 visualizable:1 continue:1 approximators:1 joshua:1 inverted:1 morgan:1 minimum:1 additional:1 somewhat:1 reoptimization:3 nikos:1 july:1 stephen:2 u0:2 full:1 multiple:3 smooth:1 adapt:1 academic:1 crease:2 divided:1 prediction:5 variant:1 regression:4 controller:6 cmu:4 expectation:1 iteration:6 represent:2 sometimes:1 kernel:1 robotics:3 irregular:1 addition:1 unrealistically:1 szepesv:1 grow:1 publisher:1 rest:1 vxxi:1 cowan:1 near:5 backwards:1 easy:1 enough:1 switch:1 suboptimal:1 reduce:1 thread:3 whether:1 penalty:2 york:2 action:18 repeatedly:1 useful:2 cga:2 tune:1 amount:3 nonparametric:1 locally:9 simplest:1 generate:5 estimated:1 correctly:1 carnegie:1 discrete:4 key:2 recomputed:1 four:2 threshold:7 changing:1 tenth:1 utilize:1 backward:2 graph:1 merely:1 sum:3 run:4 angle:2 everywhere:2 uncertainty:1 planner:1 reader:1 reasonable:2 decide:1 lxx:3 almost:1 decision:4 accelerates:1 ki:4 bound:6 ddp:3 quadratic:10 refine:2 nearby:3 aspect:2 regulator:3 speed:2 performing:2 developing:2 combination:1 kd:1 describes:2 slightly:2 across:1 island:1 spaan:1 making:1 invariant:5 gradually:4 equation:8 resource:1 previously:1 remains:1 needed:1 know:1 dyer:1 end:3 available:2 generalizes:1 gaussians:2 apply:5 away:2 appropriate:1 enforce:1 save:2 alternative:2 lxi:2 permanently:1 restrictive:1 practicality:1 build:3 approximating:1 sweep:3 move:1 question:1 added:2 already:1 parametric:3 strategy:2 distance:5 link:22 simulated:3 thrun:3 luu:3 unstable:1 enforcing:1 index:1 balance:4 minimizing:1 difficult:2 potentially:2 design:5 implementation:5 policy:63 perform:3 allowing:1 upper:3 disagree:1 dispersion:1 markov:3 finite:2 incorrectly:1 situation:1 communication:1 pair:3 specified:1 optimized:5 learned:2 established:1 discontinuity:6 address:1 beyond:1 able:2 usually:1 program:1 including:2 memory:4 belief:3 event:1 hybrid:2 solvable:1 arm:2 representing:3 scheme:1 improve:1 created:2 speeding:1 swingup:18 relative:2 law:2 loss:1 plant:1 expect:1 limitation:1 proportional:1 approximator:1 generator:1 foundation:1 integrate:1 humanoid:1 sufficient:1 consistent:2 editor:2 storing:5 balancing:1 qix:2 changed:1 convinced:1 supported:1 free:1 tsitsiklis:2 bias:1 institute:1 neighbor:10 explaining:1 taking:1 wide:2 munos:1 fifth:1 sparse:3 van:1 boundary:1 dimension:5 default:9 contour:5 preventing:1 forward:1 adaptive:1 reinforcement:4 atkeson:5 far:1 ec:1 approximate:6 observable:3 keep:1 global:8 assumed:1 xi:3 continuous:6 search:3 table:2 promising:1 robust:1 improving:1 complex:4 necessarily:1 did:2 border:1 edition:1 qiu:3 referred:1 cubic:1 ny:2 wiley:1 position:10 candidate:12 removing:1 enlarging:1 erroneous:1 companion:1 explored:4 adding:2 downward:1 easier:1 surprise:3 explore:5 failed:1 partially:4 joined:1 louisiana:1 determines:1 lewis:1 goal:30 towards:1 lipschitz:1 hard:1 upright:1 typical:1 lui:2 infinite:2 averaging:1 total:2 accepted:2 select:3 incorporate:1 tested:1 |
2,593 | 3,351 | The Generalized FITC Approximation
Andrew Naish-Guzman & Sean Holden
Computer Laboratory
University of Cambridge
Cambridge, CB3 0FD. United Kingdom
{agpn2,sbh11}@cl.cam.ac.uk
Abstract
We present an efficient generalization of the sparse pseudo-input Gaussian process (SPGP) model developed by Snelson and Ghahramani [1], applying it to
binary classification problems. By taking advantage of the SPGP prior covariance structure, we derive a numerically stable algorithm with O(N M²) training
complexity, asymptotically the same as related sparse methods such as the informative vector machine [2], but which more faithfully represents the posterior.
We present experimental results for several benchmark problems showing that
in many cases this allows an exceptional degree of sparsity without compromising accuracy. Following [1], we locate pseudo-inputs by gradient ascent on the
marginal likelihood, but exhibit occasions when this is likely to fail, for which we
suggest alternative solutions.
1 Introduction
Gaussian processes are a flexible and popular approach to non-parametric modelling. Their conceptually simple architecture is allied with a sound Bayesian foundation, so that not only does their
predictive power rival state-of-the-art discriminative methods such as the support vector machine,
but they also have the additional benefit of providing an estimate of variance, giving an error bar for
their prediction. However, there is a computational price to pay for this robust framework: the time
for training scales as N³ for N data points, and the cost of prediction is O(N²) per test case.
Recently, there has been great interest in finding sparse approximations to the full Gaussian process
(GP) in order to accelerate training and prediction times respectively to O(N M²) and O(M²),
where M ≪ N is the size of an auxiliary set, often a subset of the training data, termed variously
the inducing inputs, pseudo-inputs or the active set [3, 4, 5, 2, 6, 7, 1]; in this paper, we use the
terms interchangeably. Quiñonero-Candela and Rasmussen [8] demonstrated how many of these
schemes are related through different approximations to the joint prior over training and test points.
In this paper we consider the "fully independent training conditional" or FITC approximation, which
appeared originally in Snelson and Ghahramani [1] as the sparse pseudo-input GP (SPGP).
Restricted to a Gaussian noise model, the FITC approximation is entirely tractable; however, for
many problems, the Gaussian assumption is inappropriate. In this paper, we describe an extension
for non-Gaussian likelihoods, considering as an example probit noise for binary classification. This
is not only a common problem, but our results bear out the intuition that sparse methods are well-suited: many data sets enjoy the property that class label does not fluctuate rapidly in the input space,
often allowing large regions to be summarized with very few inducing inputs. Contrast this with
regression problems, where higher frequency components in the latent signal demand the pseudo-inputs appear in much higher density.
The informative vector machine (IVM) of Lawrence et al. [2] is another sparse GP method that has
been extended to non-Gaussian noise models. It is a subset of data method in which the active set
is grown incrementally from the training data using a fast information gain heuristic to find at each
stage the optimal inclusion. When a threshold number of points have been added, the algorithm
terminates: only data accumulated into the active set are relevant for prediction; remaining points
influence the model only in the weak sense of guiding previous steps of the algorithm. Our method is
an improvement in three regards: firstly, the FITC approximation makes use of all the data, yielding
for the same active set a closer approximation to the posterior distribution. Secondly, unlike the
standard IVM approach, we fit a stable posterior at each iteration, providing more accurate marginal
likelihood estimates, and derivatives thereof, to allow more reliable model selection. Finally, we
argue with experimental justification that the ability to locate inducing inputs independently of the
training data, as compared with the greedy approach that drives the IVM, can be a great advantage
in finding the sparsest solutions. We discuss these points and other related work in greater detail in
section 6.
The structure of this paper is as follows: in section 2 we describe the FITC approximation; this is
followed in section 3 by a detailed description of its representation for a non-Gaussian noise model;
section 4 provides a brief account of the procedure for model selection; experimental results appear
in section 5, which we discuss in section 6; our concluding remarks are in section 7.
2 The FITC approximation
Given a domain 𝒳 and covariance function K(·, ·) : 𝒳 × 𝒳 → R, a Gaussian process (GP) over
the space of real-valued functions of 𝒳 specifies the joint distribution at any finite set X ⊂ 𝒳:

p(f | X) = N(f ; 0, K_ff),

where the f = {f_n}_{n=1}^N are (latent) values associated with each x_n ∈ X, and K_ff is the Gram
matrix, the evaluation of the covariance function at all pairs (x_i, x_j). We apply Bayes' rule to obtain
the posterior distribution over the f, given the observed X and y, which with the assumption of
i.i.d. Gaussian corrupted observations is also normally distributed. Predictions at X_* are made by
marginalizing over f in the (Gaussian) joint p(f, f_* | X, y, X_*). See [9] for a thorough introduction.
In order to derive the FITC approximation, we follow [8] and introduce a set of M inducing inputs
X̃ = {x̃_1, x̃_2, . . . , x̃_M} with associated latent values u. By the consistency of GPs, we have

p(f, f_* | X, X_*, X̃) = ∫ p(f, f_* | u, X, X_*) p(u | X̃) du ≈ ∫ q(f | u, X) q(f_* | u, X_*) p(u | X̃) du,

where p(u | X̃) = N(u ; 0, K_uu). In the final expression we make the critical approximation by
imposing a conditional independence assumption on the joint prior over training and test cases:
communication between them must pass through the bottleneck of the inducing inputs. The FITC
approximation follows by letting

q(f | u, X) = N(f ; K_fu K_uu^{-1} u, diag(K_ff − Q_ff)),    (1)
q(f_* | u, X_*) = N(f_* ; K_*u K_uu^{-1} u, diag(K_** − Q_**)),    (2)

where Q_ab := K_au K_uu^{-1} K_ub. Of interest for predictions is the posterior distribution over the inducing inputs; this is most efficiently obtained via Bayes' rule after inferring the distribution over f.¹
Using (1) and marginalizing over the exact prior on u we obtain the approximate prior on f:

q(f | X) = ∫ N(f ; K_fu K_uu^{-1} u, diag(K_ff − Q_ff)) N(u ; 0, K_uu) du
         = N(f ; 0, Q_ff + diag(K_ff − Q_ff)).    (3)
In the original paper, Snelson and Ghahramani placed the pseudo-inputs randomly and learned their
locations by non-linear optimization of the marginal likelihood. We have adopted the idea in this
paper, but as emphasized in [8], the FITC approximation is applicable regardless of how the inducing
¹ We could also infer the posterior over u directly, rather than marginalizing over the inducing inputs as here. Running EP in this setting, each site maintains a belief about the full M × M covariance, and we obtain a slower O(N M³) algorithm. Furthermore, calculations to evaluate the derivatives of the log marginal likelihood with respect to inducing inputs x̃_m are significantly complicated by their presence in both prior and likelihood.
inputs are obtained, and other schemes for their initialization could equally well be married with our
algorithm.
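As an illustration (not the implementation behind the paper's experiments), the approximate prior covariance of (3) can be assembled as follows; the covariance-function interface K(A, B) and the jitter term are assumptions of this sketch.

    import numpy as np

    def fitc_prior_cov(K, X, Xu, jitter=1e-6):
        """Return Qff + diag(Kff - Qff) for training inputs X and pseudo-inputs Xu.
        K(A, B) evaluates the covariance function between two sets of points."""
        Kuu = K(Xu, Xu) + jitter * np.eye(len(Xu))
        Kfu = K(X, Xu)
        L = np.linalg.cholesky(Kuu)
        V = np.linalg.solve(L, Kfu.T)          # V^T V = Kfu Kuu^{-1} Kuf = Qff
        Qff = V.T @ V
        return Qff + np.diag(np.diag(K(X, X)) - np.diag(Qff))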
In the case of classification, a sigmoidal function assigns class labels y_n ∈ {±1} with a probability
that increases monotonically with the latent f_n. We use the probit with bias β,

p(y_n | f_n, β) = Φ(y_n (f_n + β)) = ∫_{−∞}^{y_n (f_n + β)} N(z ; 0, 1) dz.    (4)
The posterior distribution p(f |X, y) is only tractable for Gaussian likelihoods, hence we must resort
to a further approximation, either by generating Monte Carlo samples from it or fitting deterministically a Gaussian approximation. Of the latter methods, expectation propagation is possibly the most
accurate (at least for GP classification; see [10]), and it is the approach we follow below.
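Although the moment calculations are omitted later in the text, for the probit factor they are the standard GP classification results (see [9]); the sketch below computes them for a cavity marginal N(f_n; m, v), with the bias β as in (4).

    import numpy as np
    from scipy.stats import norm

    def probit_moments(y, m, v, beta=0.0):
        """Zeroth moment z_n and matched mean/variance of the tilted distribution."""
        s = np.sqrt(1.0 + v)
        z = y * (m + beta) / s
        Z = norm.cdf(z)
        r = norm.pdf(z) / Z                    # ratio N(z)/Phi(z)
        mean = m + y * v * r / s
        var = v - v * v * r * (z + r) / (1.0 + v)
        return Z, mean, var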
3 Inference
We begin with a very brief account of expectation propagation (EP); for more details, see [11,
12]. Suppose we have an intractable distribution over f whose unnormalized form factorizes into
a product of terms, such as a dense Gaussian prior t_0(f) and a series of independent likelihoods
{t_n(y_n | f_n)}_{n=1}^N. EP constructs the approximate posterior as a product of scaled site functions t̃_n.
For computational tractability, these sites are usually chosen from an exponential family with natural
parameters θ, since in this case their product retains the same functional form as its components.
The Gaussian (μ, Σ) has a natural parameterization (b, Π) = (Σ^{-1} μ, −(1/2) Σ^{-1}). If the prior is of
this form, its site function is exact:
p(f | y) = (1/Z) t_0(f) ∏_{n=1}^N t_n(y_n | f_n) ≈ q(f ; θ) = t_0(f) ∏_{n=1}^N z_n t̃_n(f_n ; θ_n),    (5)
where Z is the marginal likelihood and z_n are the scale parameters. Ideally, we would choose θ at
the global minimum of some divergence measure d(p‖q), but the necessary optimization is usually
intractable. EP is an iterative procedure that finds a minimizer of KL(p(f | y) ‖ q(f ; θ)) on a pointwise
basis: at each iteration, we select a new site n, and from the product of the cavity distribution formed
by the current marginal with the omission of that site, and the true likelihood term t_n, we obtain the
so-called tilted distribution q^n(f_n ; θ^{\n}). A simpler optimization min_{θ_n} KL(q^n(f_n ; θ^{\n}) ‖ q(f_n ; θ))
then fits only the parameters θ_n: this is equivalent to moment matching between the two distributions,
with scale zn chosen to match the zeroth-order moments. After each site update, the moments at the
remaining sites are liable to change, and several iterations may be required before convergence.
In the discussion below we omit the moment calculations for the probit model, since they correspond
to those of traditional GP classification (for more details, consult [9]). Of greater interest is how the
mean and covariance structure of the approximate posterior is preserved. Examining the form of the
prior (3), we see the covariance consists of a diagonal component D_0 and a rank-M term P_0 M_0 P_0^T,
where P_0 = K_fu and M_0 = K_uu^{-1} (zero subscripts refer to these initial values; the matrices are
updated during the course of the EP iterations). Since the observations yn are generated i.i.d., we
can expect this decomposition to persist in the posterior.
EP requires efficient operations for marginalization to obtain p(fn ), and for updating the posterior
distribution after refining a site, as well as for refreshing the posterior to avoid loss of numerical
precision. Decomposing M = R^T R into its Cholesky factor,² we represent the posterior covariance
A and mean h by

A = D + P R^T R P^T,    h = μ + P γ,

² Care must be taken that the factors share the correct orientation. When our environment offers only upper Cholesky factors R^T R, the initialization of R_0 = chol(K_uu^{-1}) can be achieved without computing the explicit inverse via the following matrix rotations: R_0 := rot180( chol( rot180(K_uu) )^T \ I ).

where D is diagonal, μ is N × 1 and γ is M × 1. Writing p_n^T = P_(n,·) and d_n = D_nn, we obtain
marginals in O(M²):

h_n = μ_n + p_n^T γ,    A_nn = d_n + ‖R p_n‖².
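Footnote 2's rotation trick translates directly into code; a sketch using NumPy/SciPy, where the jitter term is an added safeguard not mentioned in the text:

    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    def init_R0(Kuu, jitter=1e-6):
        """Upper-triangular R0 with R0^T R0 = Kuu^{-1}, without forming Kuu^{-1}."""
        M = Kuu.shape[0]
        U = cholesky(np.rot90(Kuu + jitter * np.eye(M), 2), lower=False)  # U^T U = rot180(Kuu)
        return np.rot90(solve_triangular(U.T, np.eye(M), lower=True), 2)  # rot180(U^{-T})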
Now consider a change in the precision at site n by Δ_n. Define the vector e of length N such that
e_n = 1 and all other elements are zero. The new covariance A_new is obtained by inverting the sum
of the old precision matrix and the change in precision. If we let E = D^{-1} + Δ_n e e^T, so that

E^{-1} = D − (Δ_n d_n² / (1 + Δ_n d_n)) e e^T    and    (DED)^{-1} = D^{-1} − (Δ_n / (1 + Δ_n d_n)) e e^T,

then from the matrix inversion lemma, A^{-1} = D^{-1} − D^{-1} P R^T (R P^T D^{-1} P R^T + I)^{-1} R P^T D^{-1},
and incorporating the update to site n,

A_new = E^{-1} − E^{-1} D^{-1} P R^T ( R P^T (DED)^{-1} P R^T − I − R P^T D^{-1} P R^T )^{-1} R P^T D^{-1} E^{-1}
      = D_new + P_new R_new^T R_new P_new^T,

where we expand the inversion to obtain a rank-1 downdate to the Cholesky factor R;³ in summary

D_new = D − (Δ_n d_n² / (1 + Δ_n d_n)) e e^T    (O(1) update),
P_new = P − (Δ_n d_n / (1 + Δ_n d_n)) e p_n^T    (O(M) update),
R_new = chol⁻( R^T ( I − (Δ_n / (1 + Δ_n A_nn)) R p_n p_n^T R^T ) R )    (O(M²) update).
If the second site parameter, corresponding to precision times mean, is changed by b_n, then

A_new^{-1} h_new = A^{-1} h + b_n e    ⇒    h_new = A_new (A^{-1} h + b_n e) = A_new ( (A_new^{-1} − Δ_n e e^T) h + b_n e ) = μ_new + P_new γ_new,

where

μ_new = μ + ((b_n − Δ_n μ_n) d_n / (1 + Δ_n d_n)) e    (O(1)),
γ_new = γ + ((b_n − Δ_n h_n) / (1 + Δ_n d_n)) R_new^T R_new p_n    (O(M²)).
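Gathering these formulas, one site refinement can be sketched as below. chol_rank1 is a placeholder for a rank-one Cholesky up/downdate routine (for example LINPACK's dchud/dchdd), with the sign handling of footnote 3; the routine as a whole is a schematic reading of the updates above, not the original implementation.

    import numpy as np

    def refine_site(D, P, R, mu, gamma, n, d_pi, d_b, chol_rank1):
        """Update the (D, P, R, mu, gamma) posterior after changing site n's
        precision by d_pi and its precision-times-mean parameter by d_b."""
        d, p = D[n], P[n].copy()
        Rp = R @ p
        A_nn = d + Rp @ Rp                     # marginal variance at site n
        h_n = mu[n] + p @ gamma                # marginal mean at site n
        c = d_pi / (1.0 + d_pi * d)
        D[n] -= c * d * d                      # O(1)
        P[n] -= c * d * p                      # O(M)
        s = d_pi / (1.0 + d_pi * A_nn)
        w = R.T @ Rp                           # = R^T R p_n
        R = chol_rank1(R, w, -s)               # O(M^2): factor of R^T R - s w w^T
        mu[n] += (d_b - d_pi * mu[n]) * d / (1.0 + d_pi * d)
        gamma += (d_b - d_pi * h_n) / (1.0 + d_pi * d) * (R.T @ (R @ p))
        return D, P, R, mu, gamma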
It is necessary to refresh the covariance and mean every complete EP cycle to avoid loss of precision:

D_new = (I + D_0 Π)^{-1} D_0    (O(N)),
P_new = (I + D_0 Π)^{-1} P_0    (O(N M)),
R_new = rot180( chol( rot180( I + R_0 P_0^T Π (I + D_0 Π)^{-1} P_0 R_0^T ) )^T \ I ) R_0    (O(N M²)),

where Π is the diagonal matrix of site precisions, and R_new is obtained being careful to ensure the orientations of the factorizations are not mixed. Finally, the mean is refreshed using

μ_new = D_new b in O(N),    γ_new = R_new^T R_new P_new^T b in O(N M),

where we have assumed h_0 = 0.
Reviewing the algorithm above, we see that EP costs are dominated by the O(M²) Cholesky downdate at each site inclusion. After visiting each of the N sites, we are advised to perform a full refresh,
which is O(N M²), together leading to asymptotic complexity of O(N M²).
3.1 Predictions
To make predictions, we marginalize out u from (2). Initially, Bayes' theorem is used to find the
posterior distribution over u from the inferred posterior over f:

p(u | f) ∝ p(f | u) p(u) = N(u | R_0^{-1} c, R_0^{-1} C R_0^{-T}),    where c = C R_0 P_0^T D_0^{-1} f    and    C^{-1} = I + R_0 P_0^T D_0^{-1} P_0 R_0^T.

³ If the factor Δ_n / (1 + Δ_n A_nn) is negative, we make a rank-1 update, guaranteed to preserve the positive definite property. Note that on rare occasions, loss of precision can cause a downdate to result in a non-positive definite covariance matrix. If this occurs, we should abort the update and refresh the posterior from scratch. In any case, to improve conditioning, it is recommended to add a small multiple of the identity to the prior M_0.
Let our posterior approximation be q(f | y) = N(f ; h, A). Hence

p(u | y) ≈ ∫ p(u | f) q(f | y) df = N(u | R_0^{-1} ω, R_0^{-1} Ω R_0^{-T}),

where ω = C R_0 P_0^T D_0^{-1} h and Ω = C + C R_0 P_0^T D_0^{-1} A D_0^{-1} P_0 R_0^T C.
Obtaining these terms is O(N M²) if we take advantage of the structure of A; the most stable
method is via the Cholesky factorization of C^{-1}, rather than forming the explicit inverse. At x_*,

p(f_* | x_*, y) = ∫ p(f_* | u) p(u | y) du = N(f_* | μ_*, σ_*²);

after precomputations, μ_* = k_*^T R_0^T ω is O(M), and σ_*² = k_** + k_*^T R_0^T (Ω − I) R_0 k_* is O(M²).
In the classification domain, we will usually be interested in

p(y_* | x_*, y) = ∫ p(y_* | f_*) p(f_* | x_*, y) df_* = Φ( y_* μ_* / √(1 + σ_*²) ).
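At test time these expressions reduce to a few matrix-vector products per point; a sketch, where omega and Omega are the posterior parameters over u computed once after EP, and the function name is illustrative:

    import numpy as np
    from scipy.stats import norm

    def predict_proba(k_star, k_star_star, R0, omega, Omega):
        """p(y_* = +1 | x_*, y); k_star holds the covariances K(x_*, X_tilde)."""
        a = R0 @ k_star                        # = R0 k_*
        mu_s = a @ omega                       # O(M)
        var_s = k_star_star + a @ ((Omega - np.eye(len(omega))) @ a)  # O(M^2)
        return norm.cdf(mu_s / np.sqrt(1.0 + var_s))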
4 Model selection
EP provides an estimate of the log evidence by matching the 0th-order moments zn at each inclusion.
When our posterior approximation is exponential family, Seeger [12] shows the estimate to be

L = Σ_{n=1}^N log C_n + φ(θ^post) − φ(θ^prior),    where    log C_n = log z_n − φ(θ^post) + φ(θ^{\n}),
where φ(·) denotes the log partition function and θ are again the natural parameters, with superscripts indicating prior, posterior and cavity. Of interest for model selection are derivatives of the
marginal likelihood with respect to hyperparameters {ξ, X̃, ρ}, respectively the kernel parameters,
pseudo-input locations, and noise model parameters. When the EP fixed point conditions hold (that
is, the moments of the tilted distributions match the marginals up to second order for all sites),
??prior L = ? post ? ? prior
and
??n L = log zn ,
where ? denotes the moment parameters of the exponential family (for the Gaussian, these are
(?, ? + ??T )) and ?n is a parameter of site n (and does not feature in the prior). Finally, we need
prior
derivatives ?? ? prior and ?X
. The long-winded details are omitted, but by careful consideration
??
of the covariance structure, it is again possible to limit the complexity to O(N M 2 ).
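For concreteness, a sketch (our notation, NumPy) of the Gaussian log partition function $\Phi(\theta)$ used in the evidence estimate, parameterized by the natural parameters $\theta = (\Sigma^{-1}\mu, -\tfrac{1}{2}\Sigma^{-1})$:

```python
import numpy as np

def gaussian_log_partition(theta1, theta2):
    """Phi(theta) for N(mu, Sigma) with theta1 = Sigma^{-1} mu and
    theta2 = -0.5 Sigma^{-1}: Phi = 0.5 mu^T Sigma^{-1} mu + 0.5 log|2 pi Sigma|."""
    precision = -2.0 * theta2            # recover Sigma^{-1}
    sigma = np.linalg.inv(precision)
    mu = sigma @ theta1                  # recover the mean
    _, logdet = np.linalg.slogdet(2.0 * np.pi * sigma)
    return 0.5 * theta1 @ mu + 0.5 * logdet
```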
Since we run EP until convergence, our estimates for the marginal likelihood and its derivatives are
accurate, allowing us to reliably fit a model that maximizes the evidence. This is in contrast to the
IVM, in which sites excluded from the active set have parameters clamped to zero, and where those
included are not iterated to convergence, such that the necessary fixed point conditions do not hold.
A particular problem, suffered also by the similar algorithm in [13], is that derivative calculations
must be interleaved with site inclusions, and the latter operation tends to disrupt gradient information
gained from the previous step. These complications are all sidestepped in our SPGP implementation.
5 Experiments
We conducted tests on a variety of data, including two small sets from [14]⁴ and the benchmark
suite of Rätsch.⁵ The dimensionality of these classification problems ranges from two to sixty, and
the size of the training sets is of the order of 400 to 1000. Results are presented in Table 1. For
crabs and the Rätsch sets, we average over ten folds of the data; for the synth problem, Ripley has
already divided the data into training and test partitions. Comparisons are made with the full GP
classifier, and the SVM, a widely-used discriminative model which in practice is found to yield
relatively sparse solutions; we consider also the IVM, a popular framework for building sparse
⁴ Available from http://www.stats.ox.ac.uk/pub/PRNN/.
⁵ Available from http://ida.first.fhg.de/projects/bench/benchmarks.htm.
Table 1: Test errors and predictive accuracy (smaller is better) for the GP classifier, the support
vector machine, the informative vector machine, and the sparse pseudo-input GP classifier.

                Data set              GPC             SVM            IVM                  SPGPC
name           train:test  dim   err    nlp     err    #sv     err    nlp    M      err    nlp    M
synth          250:1000     2    0.097  0.227   0.098    98    0.096  0.235  150    0.087  0.234    4
crabs          80:120       5    0.039  0.096   0.168    67    0.066  0.134   60    0.043  0.105   10
banana         400:4900     2    0.105  0.237   0.106   151    0.105  0.242  200    0.107  0.261   20
breast-cancer  200:77       9    0.288  0.558   0.277   122    0.307  0.691  120    0.281  0.557    2
diabetes       468:300      8    0.231  0.475   0.226   271    0.230  0.486  400    0.230  0.485    2
flare-solar    666:400      9    0.346  0.570   0.331   556    0.340  0.628  550    0.338  0.569    3
german         700:300     20    0.230  0.482   0.247   461    0.290  0.658  450    0.236  0.491    4
heart          170:100     13    0.178  0.423   0.166    92    0.203  0.455  120    0.172  0.414    2
image          1300:1010   18    0.027  0.078   0.040   462    0.028  0.082  400    0.031  0.087  200
ringnorm       400:7000    20    0.016  0.071   0.016   157    0.016  0.101  100    0.014  0.089    2
splice         1000:2175   60    0.115  0.281   0.102   698    0.225  0.403  700    0.126  0.306  200
thyroid        140:75       5    0.043  0.093   0.056    61    0.041  0.120   40    0.037  0.128    6
titanic        150:2051     3    0.221  0.514   0.223   118    0.242  0.578  100    0.231  0.520    2
twonorm        400:7000    20    0.031  0.085   0.027   220    0.031  0.085  300    0.026  0.086    2
waveform       400:4600    21    0.100  0.229   0.107   148    0.100  0.232  250    0.099  0.228   10
linear models. In all cases, we employed the isotropic squared exponential kernel, avoiding here the
anisotropic version primarily to allow comparison with the SVM: lacking a probabilistic foundation,
its kernel parameters and regularization constant must be set by cross-validation. For the IVM,
hyperparameter optimization is interleaved with active set selection as described in [2], while for the
other GP models, we fit hyperparameters by gradient ascent on the estimated marginal likelihood,
limiting the process to twenty conjugate gradient iterations; we retained for testing that of three
to five randomly initialized models which the evidence most favoured. Results on the R?atsch data
for the semi-parametric radial basis function network are omitted for lack of space, but available at
the site given in footnote 5. In comparison with that model, SPGP tends to give sparser and more
accurate results (with the benefit of a sound Bayesian framework).
Identical tests were run for a range of active set sizes on the IVM and SPGP classifier, and we have
attempted to present the large body of results in its most comprehensible form: we list only the
sparsest competitive solution obtained. This means that using M smaller than shown tends to cause
a deterioriation in performance, but not that there is no advantage in increasing the value. After all,
as M ? N we expect error rates to match those of the full model (at least for the IVM, which
uses a subset of the training data).6 However, we believe that in exploring the behaviour of a sparse
model, the essential question is: what is the greatest sparsity we can achieve without compromising
performance? (since if sparsity were not an issue, we would simply revert to the original GP).
Small values of M for the FITC approximation were found to give remarkably low error rates, and
incremented singly would often give an improved approximation. In contrast, the IVM predictions
were no better than random guesses for even moderate M: it usually failed if the active set was
smaller than a threshold around N/3, where it was simply discarding too much information, and
greater step sizes were required for noticeable improvements in performance. With a few exceptions
then, for FITC we explored small M, while for the IVM we used larger values, more widely spread.
More challenging is the task of discriminating 4s from non-4s in the USPS digit database: the data
are 256-dimensional, and there are 7291 training and 2007 test points. With 200 pseudo-inputs (and
51,200 parameters for optimization), error rates for SPGPC are 1.94%, with an average negative log
probability of 0.051 nats. These figures improve when the allocation is raised to 400 pseudo-inputs,
to 1.79% and 0.048 nats. When provided with only 200 points, the IVM figures are 9.97% and 0.421
nats: this can be regarded as a failure to generalize, since it corresponds to labelling all test inputs
as 'not 4'; but given an active set of 400 it reaches error rates of 1.54% and NLP of 0.085 nats.
⁶ Note that the evidence is a poor metric for choosing M since it tends to increase monotonically as the
explicative power of the full GP is restored.
6 Discussion
A sparse approximation closely related to FITC is the 'deterministic training conditional' (DTC),
whose covariance consists solely of the low-rank term $LML^T$; it has appeared elsewhere under
the name projected latent variables [13]. In generative terms, DTC first obtains a posterior process
by conditioning on the inducing inputs; observations y are then drawn as noisy samples of the
mean of this process. FITC is similar, but the draws are noisy samples from the posterior process
itself; hence, while the noise component for DTC is a constant corruption $\sigma^2$, for FITC it grows
away from the inducing inputs to $K_{nn} + \sigma^2$. In comparing their SPGP model with DTC, Snelson and
Ghahramani [1] suggest that it is for this reason (i.e. due to the diagonal component in the covariance
in FITC) that the optimization of pseudo-inputs by gradient ascent on the marginal likelihood can
succeed: without the noise reduction afforded locally by relocating pseudo-inputs, DTC does not
provide a sufficiently large gradient for them to move, and the optimization gets stuck. We believe
the same mechanism operates in general for non-Gaussian noise.
This difficulty would not be significant if alternative heuristics for building the active set greedily
were effective. We hypothesize however that the most informative vectors in the greedy sense of
the IVM tend to be those which lie close to the decision boundary. Such points will have a relatively strong influence on its shape since the effect of the kernel falls off exponentially in distance
squared. A preferable solution may be that empirically found to occur with Tipping?s relevance
vector machine (RVM) [15], a degenerate GP where a particular prior on weights means only a few
basis functions survive an evidence maximization procedure to form the model;⁷ there, the classifier was often parameterized by points distant from the decision boundary, suggested to be more
'representative' of the data.
We illustrate with a simple example that, provided the optimization is feasible, very sparse solutions
may more easily be found if the inducing inputs can be positioned independently of the data. This
allows the size of the active set to grow with the complexity of the problem, rather than with N , the
number of training points. We drew samples from a two-dimensional 'xor' problem, consisting of
four unit-variance Gaussian clusters at $(\pm 1.5, \pm 1.5)$ with a small overlap, giving an optimal error
rate of around 13% and in loose terms a complexity which requires an active set of size four. By
increasing the size of the training set N in increments from 40 to 400, we obtained the learning
curves of figure 1 for the IVM and FITC models: plotted against N is the size of active set required
for the error rate to fall below 15%. Whereas the FITC model requires a constant four points to
explain the data, the demands of the IVM appear to increase almost linearly with N .
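For reproducibility, the toy data can be generated as follows (a sketch; the exact label assignment of the four clusters is our assumption):

```python
import numpy as np

def make_xor(n, seed=0):
    """Four unit-variance Gaussian clusters at (+/-1.5, +/-1.5); diagonally
    opposite clusters share a label, giving roughly 13% Bayes error."""
    rng = np.random.default_rng(seed)
    centers = np.array([[1.5, 1.5], [-1.5, -1.5], [1.5, -1.5], [-1.5, 1.5]])
    labels = np.array([1, 1, -1, -1])
    idx = rng.integers(0, 4, size=n)
    X = centers[idx] + rng.standard_normal((n, 2))
    return X, labels[idx]
```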
Evidently, the FITC model is able to capture salient details more readily than the IVM, but we
may object that it is also a richer likelihood. We therefore show learning curves for the FITC
approximation run using the IVM active set and, generously, optimal kernel parameters. With a
relatively simple and low-dimensional problem, the benefit of the adaptable active set that FITC
offers is clearly less significant than that of the improved approximation itself (although there is
a factor of 2 difference, and we believe the effects will be more pronounced for more complex
data). However, a sensible compromise where optimization of all pseudo-inputs is computationally
infeasible is to run the IVM to obtain an initial active set, but then switch to the FITC approximation
and optimize only kernel parameters, or just a small selection of the pseudo-inputs. Another option,
explored by Snelson and Ghahramani [17] for this model in the case of regression, is to learn a
low dimensional projection of the data; this is advantageous, since in this setting the pseudo-inputs only
operate under projection and can be treated as low-dimensional, potentially reducing significantly
the scale of the optimization problem. We report results of this extension in future work.
7 Conclusions
We have presented an efficient and numerically stable way of implementing the sparse FITC model
in Gaussian processes. By way of example we considered binary classification in which extra data
points are introduced to form a continuously adaptable active set. We have demonstrated that the
locations of these pseudo-inputs can be fit synchronously with parameters of the kernel, and that
⁷ We have not compared our model with the RVM since that approximation suffers from nonsensical variance
estimates away from the data. Rasmussen and Quiñonero-Candela [16] show how it can be 'healed' through
augmentation, but the resulting model is no longer sparse in the sense of providing O(M²) predictions.
[Figure 1: Left: learning curves for the toy problem described in the text (size of active set M
required, plotted against size of training set N, for FITC, IVM and IVM/FITC). Right: contours of
posterior probability for FITC in ten CG iterations from a random initialization of pseudo-inputs
(black dots).]
this procedure allows for very sparse solutions. Certain data sets, particularly those of very high
dimensionality, are not amenable to this approach since the number of hyperparameters is unfeasibly
large for non-linear optimization. In this case, we suggest resorting to a greedy approach, using a
fast heuristic like the IVM to build the active set, but adopting the FITC approximation thereafter.
An alternative which deserves investigation is to attempt an initial round of k-means clustering.
References
[1] Edward Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Advances
in Neural Information Processing Systems 18. MIT Press, 2005.
[2] Neil Lawrence, Matthias Seeger, and Ralf Herbrich. Fast sparse Gaussian process methods: the informative vector machine. In Advances in Neural Information Processing Systems 15. MIT Press, 2003.
[3] Manfred Opper and Ole Winther. Gaussian processes for classification: mean field methods. Neural
Computation, 12(11):2655-2684, 2000.
[4] Volker Tresp. A Bayesian committee machine. Neural Computation, 12(11):2719-2741, 2000.
[5] Alex Smola and Peter Bartlett. Sparse greedy Gaussian process regression. In Advances in Neural
Information Processing Systems 13. MIT Press, 2001.
[6] Lehel Csató. Gaussian processes: iterative sparse approximations. PhD thesis, Aston University, 2002.
[7] Matthias Seeger. Bayesian Gaussian process models: PAC-Bayesian generalisation error bounds and
sparse approximations. PhD thesis, University of Edinburgh, 2003.
[8] Joaquin Quiñonero-Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6(12):1939-1959, 2005.
[9] Carl Rasmussen and Christopher Williams. Gaussian processes for machine learning. MIT Press, 2006.
[10] Malte Kuss and Carl Edward Rasmussen. Assessing approximations for Gaussian process classification.
In Advances in Neural Information Processing Systems 18. MIT Press, 2005.
[11] Thomas Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, Massachusetts
Institute of Technology, 2001.
[12] Matthias Seeger. Expectation propagation for exponential families, 2005. Available from
http://www.cs.berkeley.edu/~mseeger/papers/epexpfam.ps.gz.
[13] Matthias Seeger, Christopher Williams, and Neil Lawrence. Fast forward selection to speed up sparse
Gaussian process regression. In Proceedings of the 9th International Workshop on AI Stats. Society for
Artificial Intelligence and Statistics, 2003.
[14] Brian Ripley. Pattern recognition and neural networks. Cambridge University Press, 1996.
[15] Michael E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine
Learning Research, 1:211-244, 2001.
[16] Carl Edward Rasmussen and Joaquin Quiñonero-Candela. Healing the relevance vector machine through
augmentation. In Proceedings of 22nd ICML. ACM Press, 2005.
[17] Edward Snelson and Zoubin Ghahramani. Variable noise and dimensionality reduction for sparse Gaussian processes. In Proceedings of the 22nd Annual Conference on Uncertainty in AI. AUAI Press, 2006.
A Randomized Algorithm for Large Scale Support
Vector Learning
Krishnan S.
Department of Computer Science and Automation, Indian Institute of Science, Bangalore-12
krishi@csa.iisc.ernet.in
Chiranjib Bhattacharyya
Department of Computer Science and Automation, Indian Institute of Science, Bangalore-12
chiru@csa.iisc.ernet.in
Ramesh Hariharan
Strand Genomics, Bangalore-80
ramesh@strandls.com
Abstract
This paper investigates the application of randomized algorithms for large scale
SVM learning. The key contribution of the paper is to show that, by using ideas from
random projections, the minimal number of support vectors required to solve almost separable classification problems, such that the solution obtained is near
optimal with a very high probability, is given by O(log n); if on removal of properly chosen O(log n) points the data becomes linearly separable then it is called
almost separable. The second contribution is a sampling based algorithm, motivated from randomized algorithms, which solves a SVM problem by considering
subsets of the dataset which are greater in size than the number of support vectors
for the problem. These two ideas are combined to obtain an algorithm for SVM
classification problems which performs the learning by considering only O(log n)
points at a time. Experiments done on synthetic and real life datasets show that the
algorithm does scale up state of the art SVM solvers in terms of memory required
and execution time without loss in accuracy. It is to be noted that the algorithm
presented here nicely complements existing large scale SVM learning approaches
as it can be used to scale up any SVM solver.
1 Introduction
Consider a training dataset $D = \{(x_i, y_i)\}$, $i = 1 \dots n$, $y_i \in \{+1, -1\}$, where $x_i \in \mathbb{R}^d$ are data
points and the $y_i$ specify the class labels. The problem of learning the classifier, $y = \mathrm{sign}(w^Tx + b)$,
can be narrowed down to computing $\{w, b\}$ such that it has good generalization ability. The SVM
formulation for classification, which will be called C-SVM, for determining $\{w, b\}$ is given by
[1]:
C-SVM-1:
$$\min_{(w,b,\xi)} \;\frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i$$
$$\text{subject to: } y_i(w \cdot x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1 \dots n.$$
At optimality $w$ is given by $w = \sum_{i:\alpha_i>0}\alpha_i y_i x_i$, with $0 \le \alpha_i \le C$.
Consider the set $S = \{x_i \mid \alpha_i > 0\}$; the elements of this set are called the support vectors. Note
that $S$ completely determines the solution of C-SVM. The set $S$ may not be unique, though $w$ is.
Define a parameter $\Delta$ to be the minimum cardinality over all $S$. Note that $\Delta$ does not change with the
number of examples $n$, and is often much less than $n$.
More generally, the C-SVM problem can be seen as an instance of an Abstract Optimization Problem (AOP) [2, 3, 4]. An AOP is defined as follows:
An AOP is a triple $(H, <, \Phi)$ where $H$ is a finite set, $<$ a total ordering on $2^H$, and $\Phi$ an oracle
that, for a given $F \subseteq G \subseteq H$, either reports $F = \min_<\{F' \mid F' \subseteq G\}$ or returns a set $F' \subseteq G$ with
$F' < F$.
Many SVM learning problems are AOP problems; algorithms developed for AOP problems can be
used for solving SVM problems. Every AOP has a combinatorial dimension associated with it; the
combinatorial dimension captures the notion of number of free variables for that AOP. An AOP can
be solved by a randomized algorithm by selecting subsets of size greater than the combinatorial
dimension of the problem [2].
For SVM, $\Delta$ is the combinatorial dimension of the problem; by iterating over subsets of size greater
than $\Delta$, the subsets chosen using random sampling, the problem can be solved efficiently [3, 4]; this
algorithm was called RandSVM by the authors. A priori the value of $\Delta$ is not known, but for linearly
separable classification problems the following holds: $2 \le \Delta \le d + 1$. This follows from the fact
that the dual problem is the minimum distance between 2 non-overlapping convex hulls [5]. When
the problem is not linearly separable, the authors use the reduced convex hull formulation [5] to
come up with an estimate of the combinatorial dimension; this estimate is not very clear and much
higher than $d$.¹ The algorithm RandSVM² iterates over subsets of size proportional to $\Delta^2$.
RandSVM is not practical because of the following reasons: the sample size is too large in the case of
high dimensional datasets, the dimension of the feature space is usually unknown when using kernels,
and the reduced convex hull method used to calculate the combinatorial dimension, when the data is
not separable in the feature space, isn't really useful as the number obtained is very large.
This work overcomes the above problems using ideas from random projections [6, 7] and randomized algorithms [8, 9, 2, 10]. As mentioned by the authors of RandSVM, the biggest bottleneck
in their algorithm is the value of $\Delta$, as it is too large. The main contribution is, using ideas from
random projections, the conjecture that if RandSVM is solved using $\Delta$ equal to $O(\log n)$, then the
solution obtained is close to optimal with high probability (Theorem 3), in particular for almost
separable datasets. Almost separable datasets are those which become linearly separable when a
small number of properly chosen data points are deleted from them. The second contribution is an
algorithm which, using ideas from randomized algorithms for Linear Programming (LP), solves the
SVM problem by using samples of size linear in $\Delta$. This work also shows that the theory can be
applied to non-linear kernels.
2 A NEW RANDOMIZED ALGORITHM FOR CLASSIFICATION
This section uses results from random projections, and randomized algorithms for linear programming, to develop a new algorithm for learning large scale SVM problems. In Section 2.1, we discuss
the case of linearly separable data and estimate the number of support vectors required such that the
margin is preserved with high probability, and show that this number is much smaller than the data
dimension d, using ideas from random projections. In Section 2.2, we look at how the analysis applies
to almost separable data and present the main result of the paper (Theorem 3). The section ends
with a discussion on the application of the theory to non-linear kernels. In Section 2.3, we present
the randomized algorithm for SVM learning.
2.1 Linearly separable data
We start with determining the dimension k of the target space such that on performing a random projection to the space, the Euclidean distances and dot products are preserved. The appendix contains
a few results from random projections which will be used in this section.
¹ Details of this calculation are presented in the supplementary material.
² Presented in the supplementary material.
For a linearly separable dataset $D = \{(x_i, y_i), i = 1, \dots, n\}$, $x_i \in \mathbb{R}^d$, $y_i \in \{+1, -1\}$, the C-SVM
formulation is the same as C-SVM-1 with $\xi_i = 0$, $i = 1 \dots n$. By dividing all the constraints by
$\|w\|$, the problem can be reformulated as follows:
C-SVM-2a:
$$\max_{(\hat{w}, \hat{b}, l)} \; l \quad \text{subject to: } y_i(\hat{w} \cdot x_i + \hat{b}) \ge l, \; i = 1 \dots n, \quad \|\hat{w}\| = 1,$$
where $\hat{w} = \frac{w}{\|w\|}$, $\hat{b} = \frac{b}{\|w\|}$, and $l = \frac{1}{\|w\|}$. Here $l$ is the margin induced by the separating hyperplane,
that is, it is the distance between the 2 supporting hyperplanes $h_1: y_i(w \cdot x_i + b) = 1$ and
$h_2: y_i(w \cdot x_i + b) = -1$.
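To make the reformulation concrete, the margin $l = 1/\|w\|$ can be read off a trained hard-margin linear SVM; a sketch using scikit-learn (our choice of solver, not the paper's):

```python
import numpy as np
from sklearn.svm import SVC

def geometric_margin(X, y):
    """Fit a (near) hard-margin linear SVM and return l = 1 / ||w||."""
    clf = SVC(kernel="linear", C=1e6).fit(X, y)   # large C approximates hard margin
    return 1.0 / np.linalg.norm(clf.coef_.ravel())
```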
The determination of $k$ proceeds as follows. First, for any given value of $k$, we show the change in
the margin as a function of $k$, if the data points are projected onto the $k$-dimensional subspace and
the problem solved. From this, we determine the value $k$ ($k \ll d$) which will preserve the margin with
a very high probability. In a $k$-dimensional subspace, there are at most $k + 1$ support vectors.
Using the idea of orthogonal extensions (definition appears later in this section), we prove that when
the problem is solved in the original space, using an estimate of $k + 1$ on the number of support
vectors, the margin is preserved with a very high probability.
Let $w'$ and $x_i'$, $i = 1, \dots, n$, be the projections of $\hat{w}$ and $x_i$, $i = 1, \dots, n$, respectively onto a $k$-dimensional subspace (as in Lemma 2, Appendix A). The classification problem in the projected
space, with the dataset being $D' = \{(x_i', y_i), i = 1, \dots, n\}$, $x_i' \in \mathbb{R}^k$, $y_i \in \{+1, -1\}$, can be written
as follows:
C-SVM-2b:
$$\max_{(w', \hat{b}, l')} \; l' \quad \text{subject to: } y_i(w' \cdot x_i' + \hat{b}) \ge l', \; i = 1 \dots n, \quad \|w'\| \le 1,$$
where $l' = l(1 - \epsilon)$, $\epsilon$ is the distortion, and $0 < \epsilon < 1$. The following theorem predicts, for a given
value of $\epsilon$, the $k$ such that the margin is preserved with a high probability upon projection.
Theorem 1. Let $L = \max_i\|x_i\|$ and $(w^*, b^*, l^*)$ be the optimal solution for C-SVM-2a. Let $R$ be
a random $d \times k$ matrix as given in Lemma 2 (Appendix A). Let $\tilde{w} = \frac{R^Tw^*}{\sqrt{k}}$ and $x_i' = \frac{R^Tx_i}{\sqrt{k}}$, $i = 1, \dots, n$, and let
$$k \ge \frac{8}{\epsilon^2}\Big(1 + \frac{(1+L^2)}{2l^*}\Big)^2 \log\frac{4n}{\delta}, \qquad 0 < \epsilon < 1, \; 0 < \delta < 1.$$
Then the following bound holds on the optimal margin $l_P$ obtained by solving the problem C-SVM-2b:
$$P\big(l_P \ge l^*(1 - \epsilon)\big) \ge 1 - \delta.$$
Proof. From Corollary 1 of Lemma 2 (Appendix A), we have
$$w^* \cdot x_i - \frac{\epsilon}{2}(1 + L^2) \;\le\; \tilde{w} \cdot x_i' \;\le\; w^* \cdot x_i + \frac{\epsilon}{2}(1 + L^2),$$
which holds with probability at least $1 - 4e^{-\epsilon^2 k/8}$, for some $\epsilon > 0$. Consider some example $x_i$ with
$y_i = 1$. Then the following holds with probability at least $1 - 2e^{-\epsilon^2 k/8}$:
$$\tilde{w} \cdot x_i' + b^* \;\ge\; w^* \cdot x_i - \frac{\epsilon}{2}(1 + L^2) + b^* \;\ge\; l^* - \frac{\epsilon}{2}(1 + L^2).$$
Dividing the above by $\|\tilde{w}\|$, we have $\frac{\tilde{w} \cdot x_i' + b^*}{\|\tilde{w}\|} \ge \frac{l^* - \frac{\epsilon}{2}(1 + L^2)}{\|\tilde{w}\|}$. Note that from Lemma 1 (Appendix A), we have $(1 - \epsilon)\|w^*\|^2 \le \|\tilde{w}\|^2 \le (1 + \epsilon)\|w^*\|^2$ with probability
at least $1 - 2e^{-\epsilon^2 k/8}$. Since $\|w^*\| = 1$, we have $\sqrt{1 - \epsilon} \le \|\tilde{w}\| \le \sqrt{1 + \epsilon}$. Hence
$$\frac{\tilde{w} \cdot x_i' + b^*}{\|\tilde{w}\|} \;\ge\; \frac{l^* - \frac{\epsilon}{2}(1 + L^2)}{\sqrt{1 + \epsilon}} \;\ge\; \Big(l^* - \frac{\epsilon}{2}(1 + L^2)\Big)\sqrt{1 - \epsilon} \;\ge\; l^*\Big(1 - \epsilon\Big(1 + \frac{1 + L^2}{2l^*}\Big)\Big).$$
This holds with probability at least $1 - 4e^{-\epsilon^2 k/8}$. A similar result can be derived for a point $x_j$ for
which $y_j = -1$. The above analysis guarantees that by projecting onto a $k$-dimensional space, there
exists at least one hyperplane $\big(\frac{\tilde{w}}{\|\tilde{w}\|}, \frac{b^*}{\|\tilde{w}\|}\big)$ which guarantees a margin of $l^*(1 - \beta)$, where
$$\beta = \epsilon\Big(1 + \frac{1 + L^2}{2l^*}\Big) \tag{1}$$
with probability at least $1 - n \cdot 4e^{-\epsilon^2 k/8}$. The margin obtained by solving the problem C-SVM-2b, $l_P$,
can only be better than this. So, writing the failure probability in terms of the margin distortion $\beta$, the value of $k$ is given by:
$$n \cdot 4e^{-\frac{\beta^2 k}{8\left(1 + \frac{1+L^2}{2l^*}\right)^2}} \le \delta \;\iff\; k \ge \frac{8\Big(1 + \frac{(1+L^2)}{2l^*}\Big)^2}{\beta^2}\log\frac{4n}{\delta}. \tag{2}$$
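Theorem 1 is easy to probe empirically; a sketch (NumPy, our code, reusing the geometric_margin sketch above) that projects the data with a Gaussian matrix scaled by $1/\sqrt{k}$ and re-measures the margin:

```python
import numpy as np

def margin_after_projection(X, y, k, seed=0):
    """Project onto k dimensions with R_ij ~ N(0,1), x' = R^T x / sqrt(k),
    then re-measure the hard-margin separator's margin."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], k))
    X_proj = X @ R / np.sqrt(k)
    return geometric_margin(X_proj, y)
```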
As seen above, by randomly projecting the points onto a k dimensional subspace, the margin is
preserved with a high probability. This result is similar to the results obtained in work on random
projections[7]. But there are fundamental differences between the method proposed in this paper
and the previous methods: No random projection is actually done here, and no black box access
to the data distribution is required. We use Theorem 1 to determine an estimate on the number of
support vectors such that margin is preserved with a high probability, when the problem is solved
in the original space. This is given in Theorem 2 and is the main contribution of this section. The
theorem is based on the following fact: in a k dimensional space, the number of support vectors
is upper bounded by k + 1. We show that this k + 1 can be used as an estimate of the number of
support vectors in the original space such that the solution obtained preserves the margin with a high
probability. We start with the following definition.
Definition. An orthogonal extension of a $(k-1)$-dimensional flat (a $(k-1)$-dimensional flat
is a $(k-1)$-dimensional affine space) $h_p = (w_p, b)$, where $w_p = (w_1, \dots, w_k)$, in a subspace $S_k$
of dimension $k$, to a $(d-1)$-dimensional hyperplane $h = (\tilde{w}, b)$ in $d$-dimensional space, is defined
as follows. Let $R \in \mathbb{R}^{d \times d}$ be a random projection matrix as in Lemma 2 (Appendix A). Let
$\hat{R} \in \mathbb{R}^{d \times k}$ be another random projection matrix which consists of only the first $k$ columns of
$R$. Let $\hat{x}_i = \frac{R^Tx_i}{\sqrt{k}}$ and $x_i' = \frac{\hat{R}^Tx_i}{\sqrt{k}}$. Let $w_p = (w_1, \dots, w_k)$ be the optimal hyperplane
classifier with margin $l_P$ for the points $x_1', \dots, x_n'$ in the $k$-dimensional subspace. Now define $\tilde{w}$
to be all 0's in the last $d - k$ coordinates and identical to $w_p$ in the first $k$ coordinates, that is,
$\tilde{w} = (w_1, \dots, w_k, 0, \dots, 0)$. Orthogonal extensions have the following key property: if $(w_p, b)$ is a
separator with margin $l_P$ for the projected points, then its orthogonal extension $(\tilde{w}, b)$ is a separator
with margin $l_P$ for the original points, that is,
$$\text{if } y_i(w_p \cdot x_i' + b) \ge l, \; i = 1, \dots, n, \text{ then } y_i(\tilde{w} \cdot \hat{x}_i + b) \ge l, \; i = 1, \dots, n.$$
An important point to note, which will be required when extending orthogonal extensions to nonlinear kernels, is that dot products between the points are preserved upon doing orthogonal projections, that is, $x_i'^Tx_j' = \hat{x}_i^T\hat{x}_j$.
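The construction itself is a one-liner to verify numerically; a sketch (ours):

```python
import numpy as np

def orthogonal_extension(w_p, d):
    """Pad the k-dimensional separator w_p with zeros to dimension d, so that
    w_tilde . x_hat = w_p . x' whenever x' equals the first k coordinates of x_hat."""
    return np.concatenate([w_p, np.zeros(d - len(w_p))])
```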
Let $L$, $l^*$, $\epsilon$, $\delta$ and $n$ be as defined in Theorem 1. The following is the main result of this section.
Theorem 2. Given $k \ge \frac{8}{\epsilon^2}\big(1 + \frac{(1+L^2)}{2l^*}\big)^2\log\frac{4n}{\delta}$ and $n$ training points with maximum norm $L$ in $d$-dimensional space and separable by a hyperplane with margin $l^*$, there exists a subset of $k'$ training
points $x_{1'}, \dots, x_{k'}$, where $k' \le k$, and a hyperplane $h$ satisfying the following conditions:
1. $h$ has margin at least $l^*(1 - \epsilon)$ with probability at least $1 - \delta$;
2. $x_{1'}, \dots, x_{k'}$ are the only training points which lie either on $h_1$ or on $h_2$.
Proof. Let $(w^*, b^*)$ denote the normal to a separating hyperplane with margin $l^*$, that is, $y_i(w^* \cdot x_i + b^*) \ge l^*$ for all $x_i$ and $\|w^*\| = 1$. Consider a random projection of $x_1, \dots, x_n$ to a $k$-dimensional
space and let $w', z_1, \dots, z_n$ be the projections of $w^*, x_1, \dots, x_n$, respectively, scaled by $1/\sqrt{k}$. By
Theorem 1, $y_i(w' \cdot z_i + b^*) \ge l^*(1 - \epsilon)$ holds for all $z_i$ with probability at least $1 - \delta$. Let $h$ be the
orthogonal extension of $(w', b^*)$ to the full $d$-dimensional space. Then $h$ has margin at least $l^*(1 - \epsilon)$,
as required. This shows the first part of the claim.
To prove the second part, consider the projected training points which lie on $(w', b^*)$ (that is, they lie
on either of the two sandwiching hyperplanes). Barring degeneracies, there are at most $k$ such
points. Clearly, these will be the only points which lie on the orthogonal extension $h$, by definition.
From the above analysis, it is seen that if k << d, then we can estimate that the number of support
vectors is k + 1, and the algorithm RandSVM would take on average O(k log n) iterations to solve
the problem [3, 4].
2.2 Almost separable data
In this section, we look at how the above analysis can be applied to almost separable datasets. We
call a dataset almost separable if, by removing a fraction $\eta = O\big(\frac{\log n}{n}\big)$ of the points, the dataset
becomes linearly separable.
The C-SVM formulation when the data is not linearly separable (and almost separable) was given in
C-SVM-1. This problem can be reformulated as follows:
$$\min_{(w,b,\xi)} \; \sum_{i=1}^{n}\xi_i \quad \text{subject to: } y_i(w \cdot x_i + b) \ge l - \xi_i, \; \xi_i \ge 0, \; i = 1 \dots n; \quad \|w\| \le \frac{1}{l}.$$
This formulation is known as the Generalized Optimal Hyperplane formulation. Here $l$ depends on
the value of $C$ in the C-formulation. At optimality, the margin $l^* = l$. The following theorem proves
a result for almost separable data similar to the one proved in Theorem 1 for separable data.
Theorem 3. Given $k \ge \frac{8}{\epsilon^2}\big(1 + \frac{(1+L^2)}{2l^*}\big)^2\log\frac{4n}{\delta} + \eta n$, with $l^*$ being the margin at optimality, $l$ the
lower bound on $l^*$ as in the Generalized Optimal Hyperplane formulation, and $\eta = O\big(\frac{\log n}{n}\big)$, there
exists a subset of $k'$ training points $x_{1'}, \dots, x_{k'}$, $k' \le k$, and a hyperplane $h$ satisfying the following
conditions:
1. $h$ has margin at least $l(1 - \epsilon)$ with probability at least $1 - \delta$;
2. at most $\frac{8\left(1 + \frac{(1+L^2)}{2l^*}\right)^2}{\epsilon^2}\log\frac{4n}{\delta}$ points lie on the planes $h_1$ or $h_2$;
3. $x_{1'}, \dots, x_{k'}$ are the only points which define the hyperplane $h$, that is, they are the support
vectors of $h$.
Proof. Let the optimal solution for the Generalized Optimal Hyperplane formulation be $(w^*, b^*, \xi^*)$.
As mentioned before, $w^* = \sum_{i:\alpha_i>0}\alpha_i y_i x_i$ and $l^* = \frac{1}{\|w^*\|}$. The set of support vectors can be split
into 2 disjoint sets, $SV_1 = \{x_i : \alpha_i > 0 \text{ and } \xi_i^* = 0\}$ (unbounded SVs), and $SV_2 = \{x_i : \alpha_i > 0 \text{ and } \xi_i^* > 0\}$ (bounded SVs).
Now, consider removing the points in $SV_2$ from the dataset. Then the dataset becomes linearly
separable with margin $l^*$. Using an analysis similar to Theorem 1, and the fact that $l^* \ge l$, we have
the proof for the first 2 conditions.
When all the points in $SV_2$ are added back, at most all these points are added to the set of support
vectors and the margin does not change. The margin not changing is guaranteed by the fact that, for
proving conditions 1 and 2, we have assumed the worst possible margin, and any value lower
than this would violate the constraints of the problem. This proves condition 3.
Hence the number of support vectors, such that the margin is preserved with high probability, can
be upper bounded by
$$k + 1 = \frac{8}{\epsilon^2}\Big(1 + \frac{(1+L^2)}{2l^*}\Big)^2\log\frac{4n}{\delta} + \eta n + 1 = \frac{8}{\epsilon^2}\Big(1 + \frac{(1+L^2)}{2l^*}\Big)^2\log\frac{4n}{\delta} + O(\log n). \tag{3}$$
Using a non-linear kernel. Consider a mapping function $\phi : \mathbb{R}^d \to \mathbb{R}^{d'}$, $d' > d$, which maps
a point $x_i \in \mathbb{R}^d$ to a point $z_i \in \mathbb{R}^{d'}$, where $\mathbb{R}^{d'}$ is a Euclidean space. Let the points be projected
onto a random $k$-dimensional subspace as before. Then, as in the case of linear kernels, the lemmata
in the appendix are applicable to these random projections [11]. The orthogonal extensions can be
considered as a projection from the $k$-dimensional space to the $\phi$-space, such that the kernel function
values are preserved. Then it can be shown that Theorem 3 applies when using non-linear kernels
also.
2.3 A Randomized Algorithm
The reduction in the sample size from $6d^2$ to $6k^2$ is not enough to make RandSVM useful
in practice, as $6k^2$ is still a large number. This section presents another randomized algorithm
which only requires that the sample size be greater than the number of support vectors. Hence
a sample size linear in $k$ can be used in the algorithm. This algorithm was first proposed to
solve large scale LP problems [10]; it has been adapted here for solving large scale SVM problems.
Algorithm 1 RandSVM-1(D, k, r)
Require: D - the dataset.
Require: k - the estimate of the number of support vectors.
Require: r - sample size = ck, c > 0.
1: S = randomsubset(D, r); // pick a random subset S of size r from the dataset D
2: SV = svmlearn(∅, S); // SV - set of support vectors obtained by solving the problem on S
3: V = {x ∈ D \ S | violates(x, SV)}; // violators - non-sampled points not satisfying the KKT conditions
4: while |V| > 0 and |SV| < k do
5:   R = randomsubset(V, r − |SV|); // pick a random subset from the set of violators
6:   SV = svmlearn(SV, R); // SV - support vectors obtained by solving the problem on SV ∪ R
7:   V = {x ∈ D \ (SV ∪ R) | violates(x, SV)}; // determine violators from the non-sampled set
8: end while
9: return SV
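A compact Python sketch of Algorithm 1 (scikit-learn is our choice of solver, not the paper's; the KKT violation test is simplified to a margin check, and all names are ours):

```python
import numpy as np
from sklearn.svm import SVC

def rand_svm(X, y, k, c=2, C=1.0, seed=0):
    """RandSVM-1 sketch: re-solve on random subsets of violators until
    none remain or the support-vector budget k is exceeded."""
    rng = np.random.default_rng(seed)
    n = len(y)
    r = c * k
    S = rng.choice(n, size=min(r, n), replace=False)
    while True:
        clf = SVC(kernel="linear", C=C).fit(X[S], y[S])
        sv = S[clf.support_]                              # current support vectors
        violated = np.where(y * clf.decision_function(X) < 1.0)[0]
        V = np.setdiff1d(violated, S)                     # non-sampled violators
        if len(V) == 0 or len(sv) >= k:
            return clf, sv
        R = rng.choice(V, size=min(r - len(sv), len(V)), replace=False)
        S = np.concatenate([sv, R])
```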
Proof of Convergence: Let $SV$ be the current set of support vectors. The condition $|SV| < k$ comes
from Theorem 3. Hence if the condition is violated, then the algorithm terminates with a solution which
is near optimal with a very high probability.
Now consider the case where $|SV| < k$ and $|V| > 0$. Let $x_i$ be a violator ($x_i$ is a non-sampled
point such that $y_i(w^Tx_i + b) < 1$). Solving the problem with the set of constraints as $SV \cup \{x_i\}$ will
only result, since SVM is an instance of an AOP, in the increase (decrease) of the objective function
of the primal (dual). As there are only a finite number of bases for an AOP, the algorithm is bound to
terminate; also, if termination happens with the number of violators equal to zero, then the solution
obtained is optimal.
Determination of k: The value of $k$ depends on $l$, which is not available in the case of C-SVM and
$\nu$-SVM. This can be handled only by solving for $k$ as a function of $\epsilon$, where $\epsilon$ is the maximum allowed distortion in the $L_2$ norms of the vectors upon projection. If all the data points are normalized
to length 1, that is, $L = 1$, then Equation 1 becomes $\epsilon \le \beta/\big(1 + \frac{1+L^2}{2l^*}\big)$. Combining this with the
result from Theorem 2, the value of $k$ can be determined in terms of $\epsilon$ as follows:
$$k \;\ge\; \frac{8}{\epsilon^2}\Big(1 + \frac{(1+L^2)}{2l^*}\Big)^2\log\frac{4n}{\delta} + O(\log n), \quad\text{for which it suffices that}\quad k \;\ge\; \frac{16}{\epsilon^2}\Big(1 + \frac{(1+L^2)}{2l^*}\Big)^2\log\frac{4n}{\delta}; \tag{4}$$
after substituting the distortion of Equation 1, this becomes the $\frac{16}{\epsilon^2}\log\frac{4n}{\delta}$ (respectively $\frac{16}{\epsilon^2}\cdot 2\log\frac{4n}{\delta}$ for almost separable data) used in the experiments below.
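In practice this reduces to a one-line computation of the sample-size parameter; a sketch matching the settings used in the experiments below:

```python
import numpy as np

def estimate_k(n, eps=0.2, delta=0.9, separable=True):
    """k = (16/eps^2) log(4n/delta) for linearly separable data,
    and twice that for almost separable data."""
    k = 16.0 / eps**2 * np.log(4.0 * n / delta)
    return int(np.ceil(k if separable else 2.0 * k))
```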
3 Experiments
This section discusses the performance of RandSVM in practice. The experiments were performed
on 3 synthetic and 1 real world dataset. RandSVM was used with LibSVM as the solver when using
a non-linear kernel; with SVMLight for a linear kernel. This choice was made because it was observed that SVMLight is much faster than LibSVM when using a linear kernel, and vice-versa when
using non-linear kernels. RandSVM has been compared with state of the art SVM solvers: LibSVM
for non-linear kernels, and SVMPerf and SVMLin for linear kernels.
Synthetic datasets
The twonorm dataset is a 2-class problem where each class is drawn from a multivariate normal distribution with unit variance. Each vector is 20-dimensional. One class has mean
$(a, a, \dots, a)$, and the other class has mean $(-a, -a, \dots, -a)$, where $a = 2/\sqrt{20}$.
The ringnorm dataset is a 2-class problem with each vector consisting of 20 dimensions. Each class
Category        Kernel    RandSVM          LibSVM            SVMPerf         SVMLin
twonorm_1       Gaussian  300 (94.98%)     8542 (96.48%)     X               X
twonorm_2       Gaussian  437 (94.71%)     -                 X               X
ringnorm_1      Gaussian  2637 (70.66%)    256 (70.31%)      X               X
ringnorm_2      Gaussian  4982 (65.74%)    85124 (65.34%)    X               X
checkerboard_1  Gaussian  406 (93.70%)     1568.93 (96.90%)  X               X
checkerboard_2  Gaussian  814 (94.10%)     -                 X               X
CCAT*           Linear    345 (94.37%)     X                 148 (94.38%)    429 (95.1913%)
C11*            Linear    449 (96.57%)     X                 120 (97.53%)    295 (97.71%)

Table 1: The table gives the execution time (in seconds) and the classification accuracy (in brackets).
The subscripts 1 and 2 indicate that the corresponding training set sizes are 10^5 and 10^6 respectively.
A '-' indicates that the solver did not finish execution even after running for a day. An 'X' indicates
that the experiment is not applicable for the corresponding solver. The '*' indicates that the solver
used with RandSVM was SVMLight; otherwise it was LibSVM.
is drawn from a multivariate normal distribution. One class has mean 1 and covariance 4 times the
identity. The other class has mean $(a, a, \dots, a)$ and unit covariance, where $a = 2/\sqrt{20}$.
The checkerboard dataset consists of vectors in a 2-dimensional space. The points are generated in
a $4 \times 4$ grid. Both the classes are generated from a multivariate uniform distribution; each point is
$(x_1 = U(0, 4),\, x_2 = U(0, 4))$. The points are labelled as follows: if $\lceil x_1 \rceil \bmod 2 \ne \lceil x_2 \rceil \bmod 2$, then the
point is labelled negative, else the point is labelled positive.
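A sketch of the checkerboard generator (ours; ties at cell boundaries occur with probability zero):

```python
import numpy as np

def make_checkerboard(n, seed=0):
    """Points uniform on [0,4]^2, labelled by the parity of their 4x4 grid cell."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 4.0, size=(n, 2))
    y = np.where(np.ceil(X[:, 0]) % 2 != np.ceil(X[:, 1]) % 2, -1, 1)
    return X, y
```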
For each of the synthetic datasets, a training set of 1,000,000 points and a test set of 10,000 points
was generated. A smaller subset of 100,000 points was chosen from the training set for parameter tuning. From now on, the smaller training set will have a subscript of 1 and the larger training set will
have a subscript of 2, for example, ringnorm_1 and ringnorm_2.
Real world dataset
The RCV1 dataset consists of 804,414 documents, with each document consisting of 47,236 features. Experiments were performed using 2 categories of the dataset - CCAT and C11. The dataset
was split into a training set of 700,000 documents and a test set of 104,414 documents.
Table 1 shows the kernels which were used for each of the datasets. The parameters used for the
Gaussian kernels, $\sigma$ and $C$, were obtained using grid-search based tuning. The parameter for the
linear kernel, $C$, for CCAT and C11 was obtained from previous work [12].
Selection of k for RandSVM: The values of $\epsilon$ and $\delta$ were fixed to 0.2 and 0.9 respectively, for all
the datasets. For linearly separable datasets, $k$ was set to $(16\log(4n/\delta))/\epsilon^2$. For the others, $k$ was
set to $(32\log(4n/\delta))/\epsilon^2$.
Discussion of results: Table 1, which has the timing and classification accuracy comparisons, shows
that RandSVM can scale up SVM solvers for very large datasets. Using just a small wrapper around
the solvers, RandSVM has scaled up SVMLight so that its performance is comparable to that of
state of the art solvers such as SVMPerf and SVMLin. Similarly LibSVM has been made capable of
quickly solving problems which it could not do before, even after executing for a day. Furthermore,
it is clear, from the experiments on the synthetic datasets, that the execution times taken for training
with 105 examples and 106 examples are not too far apart; this is a clear indication that the execution
time does not increase rapidly with the increase in the dataset size.
All the runs of RandSVM terminated with the condition |SV | < k being violated. Since the classification accuracies obtained by using RandSVM and the baseline solvers are very close, it is clear
that Theorem 2 holds in practice.
4 Further Research
It is clear from the experimental evaluations that randomized algorithms can be used to scale up
SVM solvers to large scale classification problems. If an estimate of the number of support vectors
is obtained then algorithm RandSVM-1 can be used for other SVM learning problems also, as they
are usually instances of an AOP. The future work would be to apply the work done here to such
problems.
A Some Results from Random Projections
Here we review a few lemmas from random projections [7]. The following lemma discusses how
the $L_2$ norm of a vector is preserved when it is projected onto a random subspace.
Lemma 1. Let $R = (r_{ij})$ be a random $d \times k$ matrix, such that each entry $r_{ij}$ is chosen independently according to $N(0, 1)$. For any fixed vector $u \in \mathbb{R}^d$, and any $\epsilon > 0$, let $u' = \frac{R^Tu}{\sqrt{k}}$. Then
$E[\|u'\|^2] = \|u\|^2$ and the following bound holds:
$$P\big((1 - \epsilon)\|u\|^2 \le \|u'\|^2 \le (1 + \epsilon)\|u\|^2\big) \ge 1 - 2e^{-(\epsilon^2 - \epsilon^3)\frac{k}{4}}.$$
The following lemma and its corollary show the change in the Euclidean distance between 2 points
and the dot products when they are projected onto a lower dimensional space [7].
Lemma 2. Let $u, v \in \mathbb{R}^d$. Let $u' = \frac{R^Tu}{\sqrt{k}}$ and $v' = \frac{R^Tv}{\sqrt{k}}$ be the projections of $u$ and $v$ to $\mathbb{R}^k$ via a
random matrix $R$ whose entries are chosen independently from $N(0, 1)$ or $U(-1, 1)$. Then for any
$\epsilon > 0$, the following bounds hold:
$$P\big((1 - \epsilon)\|u - v\|^2 \le \|u' - v'\|^2\big) \ge 1 - e^{-(\epsilon^2 - \epsilon^3)\frac{k}{4}}, \quad\text{and}$$
$$P\big(\|u' - v'\|^2 \le (1 + \epsilon)\|u - v\|^2\big) \ge 1 - e^{-(\epsilon^2 - \epsilon^3)\frac{k}{4}}.$$
A corollary of the above lemma shows how well the dot products are preserved upon projection (this is a slight modification of the corollary given in [7]).
Corollary 1. Let $u, v$ be vectors in $\mathbb{R}^d$ s.t. $\|u\| \le L_1$, $\|v\| \le L_2$. Let $R$ be a random matrix whose
entries are chosen independently from either $N(0, 1)$ or $U(-1, 1)$. Define $u' = \frac{R^Tu}{\sqrt{k}}$ and $v' = \frac{R^Tv}{\sqrt{k}}$.
Then for any $\epsilon > 0$, the following holds with probability at least $1 - 4e^{-\epsilon^2 k/8}$:
$$u \cdot v - \frac{\epsilon}{2}(L_1^2 + L_2^2) \;\le\; u' \cdot v' \;\le\; u \cdot v + \frac{\epsilon}{2}(L_1^2 + L_2^2).$$
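Lemma 1 can be checked empirically in a few lines; a sketch (ours):

```python
import numpy as np

def norm_ratio_samples(u, k, trials=1000, seed=0):
    """Samples of ||u'||^2 / ||u||^2 with u' = R^T u / sqrt(k), R_ij ~ N(0,1);
    by Lemma 1 these concentrate around 1 as k grows."""
    rng = np.random.default_rng(seed)
    d = u.shape[0]
    ratios = np.empty(trials)
    for t in range(trials):
        R = rng.standard_normal((d, k))
        u_proj = R.T @ u / np.sqrt(k)
        ratios[t] = (u_proj @ u_proj) / (u @ u)
    return ratios
```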
References
[1] V. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, 1995.
[2] Bernd Gartner. A subexponential algorithm for abstract optimization problems. In Proceedings
33rd Symposium on Foundations of Computer Science, IEEE CS Press, 1992.
[3] Jose L. Balcazar, Yang Dai, and Osamu Watanabe. A random sampling technique for training
support vector machines. In ALT. Springer, 2001.
[4] Jose L. Balcazar, Yang Dai, and Osamu Watanabe. Provably fast training algorithms for support vector machines. In ICDM, pages 43-50, 2001.
[5] K. P. Bennett and E. J. Bredensteiner. Duality and geometry in SVM classifiers. In P. Langley,
editor, ICML, pages 57-64, San Francisco, California, 2000.
[6] W. Johnson and J. Lindenstrauss. Extensions of Lipschitz maps into a Hilbert space. Contemporary Mathematics, 1984.
[7] R. I. Arriaga and S. Vempala. An algorithmic theory of learning: Random concepts and random
projections. In Proceedings of the 40th Foundations of Computer Science, 1999.
[8] Kenneth L. Clarkson. Las Vegas algorithms for linear and integer programming when the
dimension is small. Journal of the ACM, 42(2):488-499, 1995.
[9] B. Gartner and E. Welzl. A simple sampling lemma: analysis and application in geometric
optimization. In Proceedings of the 16th annual ACM symposium on Computational Geometry,
2000.
[10] M. Pellegrini. Randomizing combinatorial algorithms for linear programming when the dimension is moderately high. In SODA '01, pages 101-108, Philadelphia, PA, USA, 2001.
[11] Maria-Florina Balcan, Avrim Blum, and Santosh Vempala. On kernels, margins and lowdimensional mappings. In Proc. of the 15th Conf. Algorithmic Learning Theory, 2004.
[12] T. Joachims. Training linear svms in linear time. In Proceedings of the ACM Conference on
Knowledge Discovery and Data Mining (KDD), 2006.
8
| 3352 |@word norm:3 termination:1 d2:1 covariance:2 pick:2 reduction:1 wrapper:1 contains:1 selecting:1 document:4 bhattacharyya:1 existing:1 current:1 com:1 written:1 kdd:1 plane:1 iterates:1 hyperplanes:2 unbounded:1 become:1 symposium:2 prove:2 consists:3 considering:2 solver:12 becomes:4 iisc:2 cardinality:1 bounded:3 project:1 developed:1 guarantee:2 every:1 classifier:3 scaled:2 k2:2 unit:2 before:3 positive:1 timing:1 subscript:3 black:1 bredensteiner:1 ringnorm:1 unique:1 practical:1 yj:1 practice:3 langley:1 projection:23 onto:6 close:3 selection:1 map:1 independently:2 convex:3 proving:1 notion:1 coordinate:2 target:1 programming:4 us:1 pa:1 element:1 satisfying:3 predicts:1 observed:1 solved:7 capture:1 worst:1 calculate:1 rij:2 ordering:1 decrease:1 contemporary:1 mentioned:2 moderately:1 solving:9 upon:4 completely:1 basis:1 fast:1 whose:2 supplementary:2 solve:3 larger:1 distortion:2 otherwise:1 ability:1 indication:1 lowdimensional:1 product:4 combining:1 rapidly:1 kv:1 convergence:1 extending:1 executing:1 tions:1 develop:1 x0i:10 solves:2 dividing:2 c:1 come:2 indicate:1 hull:3 material:2 violates:2 balcazar:2 require:3 generalization:1 really:1 extension:9 svmperf:3 hold:10 around:1 considered:1 normal:3 pellegrini:1 mapping:2 algorithmic:2 claim:2 proc:1 applicable:2 label:1 combinatorial:7 vice:1 clearly:1 gaussian:7 ck:1 corollary:5 l0:4 derived:1 joachim:1 properly:2 maria:1 indicates:3 baseline:1 vk2:2 provably:1 classification:10 dual:2 subexponential:1 logn:2 art:3 ernet:2 apriori:1 equal:2 santosh:1 nicely:1 barring:1 sampling:4 identical:1 look:2 icml:1 future:1 report:1 others:1 bangalore:3 few:2 randomly:1 preserve:2 geometry:2 consisting:2 mining:1 evaluation:1 bracket:1 kvk:1 primal:1 capable:1 orthogonal:9 euclidean:3 minimal:1 instance:3 column:1 svmlin:3 zn:1 subset:10 entry:3 uniform:1 johnson:1 too:3 randomizing:1 sv:20 kxi:1 synthetic:5 combined:1 sv1:1 fundamental:1 randomized:12 twonorm:1 quickly:1 w1:3 l22:2 conf:1 return:2 checkerboard:1 wk:3 automation:2 pone:1 depends:2 later:1 h1:3 performed:2 doing:1 sandwiching:1 start:2 aximize:2 narrowed:1 contribution:5 hariharan:1 accuracy:4 variance:1 efficiently:1 ccat:3 l21:2 definition:4 associated:1 proof:5 degeneracy:1 sampled:1 dataset:19 proved:1 knowledge:1 hilbert:1 actually:1 back:1 appears:1 higher:1 day:2 specify:1 formulation:9 done:4 though:1 box:1 furthermore:1 just:1 nonlinear:1 overlapping:1 usa:1 normalized:1 concept:1 hence:4 wp:6 noted:1 generalized:3 performs:1 l1:1 balcan:1 vega:1 x0t:1 slight:1 versa:1 rd:12 tuning:2 grid:2 mathematics:1 hp:1 similarly:1 dot:4 access:1 f0:1 multivariate:3 apart:1 life:1 yi:21 seen:3 minimum:2 greater:4 dai:2 c11:3 determine:3 u0:6 full:1 violate:1 x10:4 d0:1 faster:1 determination:2 calculation:1 icdm:1 florina:1 iteration:1 kernel:19 preserved:11 else:1 subject:4 induced:1 call:1 integer:1 near:2 yang:2 svmlight:4 split:2 enough:1 krishnan:1 xj:2 finish:1 zi:3 idea:7 bottleneck:1 motivated:1 handled:1 clarkson:1 reformulated:2 york:1 svs:2 generally:1 iterating:1 clear:5 useful:2 svms:1 category:2 reduced:2 sign:1 disjoint:1 dently:1 key:2 blum:1 deleted:1 drawn:2 changing:1 libsvm:6 kuk:1 kenneth:1 fraction:1 run:1 jose:2 soda:1 almost:10 x0n:1 appendix:7 investigates:1 comparable:1 bound:5 guaranteed:1 oracle:1 annual:1 adapted:1 constraint:3 x2:1 flat:2 optimality:3 min:1 performing:1 separable:24 rcv1:1 vempala:2 conjecture:1 department:2 according:1 smaller:3 terminates:1 lp:8 modification:1 happens:1 projecting:2 taken:1 chiranjib:1 
equation:1 gartner:2 discus:3 inimize:2 end:2 available:1 apply:1 original:5 running:1 prof:2 objective:1 added:2 rt:1 subspace:8 distance:4 separating:2 w0:4 reason:1 length:1 negative:1 xk0:4 unknown:1 upper:2 datasets:12 ramesh:2 finite:2 supporting:1 complement:1 bernd:1 required:6 z1:1 california:1 nu:1 proceeds:1 usually:2 ku0:2 max:1 memory:1 aop:11 philadelphia:1 isn:1 genomics:1 review:1 geometric:1 l2:20 removal:1 discovery:1 determining:2 loss:1 proportional:1 triple:1 h2:3 foundation:2 x01:1 affine:1 editor:1 last:1 free:1 institute:2 dimension:13 xn:2 world:2 author:3 made:2 projected:7 san:1 far:1 overcomes:1 sv2:3 kkt:1 assumed:1 francisco:1 xi:28 search:1 sk:1 table:4 terminate:1 ku:6 nature:1 csa:2 separator:2 did:1 main:4 linearly:11 terminated:1 allowed:1 x1:3 biggest:1 watanabe:2 lie:5 down:1 theorem:18 rk:2 removing:2 svm:32 alt:1 exists:3 vapnik:1 avrim:1 execution:5 margin:30 arriaga:1 strand:1 applies:2 springer:2 determines:1 violator:5 acm:3 chiru:1 identity:1 labelled:3 lipschitz:1 bennett:1 change:4 determined:1 hyperplane:12 wt:1 lemma:13 called:4 total:1 duality:1 experimental:1 osamu:2 la:1 support:23 indian:2 violated:2 d1:1 |
2,595 | 3,353 | Consistent Minimization of Clustering Objective Functions
Ulrike von Luxburg
Max Planck Institute for Biological Cybernetics
ulrike.luxburg@tuebingen.mpg.de

Sébastien Bubeck
INRIA Futurs Lille, France
sebastien.bubeck@inria.fr

Stefanie Jegelka
Max Planck Institute for Biological Cybernetics
stefanie.jegelka@tuebingen.mpg.de

Michael Kaufmann
University of Tübingen, Germany
mk@informatik.uni-tuebingen.de
Abstract
Clustering is often formulated as a discrete optimization problem. The objective is to find, among all partitions of the data set, the best one according to some quality measure. However, in the statistical setting where we assume that the finite data set has been sampled from some underlying space, the goal is not to find the best partition of the given sample, but to approximate the true partition of the underlying space. We argue that the discrete optimization approach usually does not achieve this goal. As an alternative, we suggest the paradigm of "nearest neighbor clustering". Instead of selecting the best out of all partitions of the sample, it only considers partitions in some restricted function class. Using tools from statistical learning theory we prove that nearest neighbor clustering is statistically consistent. Moreover, its worst case complexity is polynomial by construction, and it can be implemented with small average case complexity using branch and bound.
1 Introduction
Clustering is the problem of discovering "meaningful" groups in given data. Many algorithms try to achieve this by minimizing a certain quality function Q_n, for example graph cut objective functions such as ratio cut or normalized cut, or various criteria based on some function of the within- and between-cluster similarities. The objective of clustering is then stated as a discrete optimization problem. Given a data set X_n = {X_1, ..., X_n} and a clustering quality function Q_n, the ideal clustering algorithm should take into account all possible partitions of the data set and output the one that minimizes Q_n. The implicit understanding is that the "best" clustering can be any partition out of the set of all possible partitions of the data set. The algorithmic challenge is to construct an algorithm which is able to find this clustering. We will call this approach the "discrete optimization approach to clustering".

If we look at clustering from the perspective of statistical learning theory we assume that the finite data set has been sampled from an underlying data space X according to some probability measure. The ultimate goal in this setting is not to discover the best possible partition of the data set X_n, but to learn the "true clustering" of the underlying space. In an approach based on quality functions, this "true clustering" can be defined easily. We choose a clustering quality function Q on the set of partitions of the entire data space X, and define the true clustering f* to be the partition minimizing Q. In this setting, a very important property of a clustering algorithm is consistency. Denoting the clustering constructed on the finite sample by f_n, we require that Q(f_n) converges to Q(f*) when n → ∞. The most important insight of statistical learning theory is that in order to be consistent, learning algorithms have to choose their functions from some "small" function space only. To measure the size of a function space F one uses the quantity N_F(x_1, ..., x_n), which denotes the number of ways in which the points x_1, ..., x_n can be partitioned by functions in F. One can prove that in the standard setting of statistical learning theory, a necessary condition for consistency is that E log N_F(x_1, ..., x_n)/n → 0 (cf. Theorem 2.3 in Vapnik, 1995, Section 12.4 of Devroye et al., 1996).
Stated like this, it becomes apparent that the two viewpoints described above are not compatible
with each other. While the discrete optimization approach on any given sample attempts to find
the best of all (exponentially many) partitions, the statistical learning theory approach restricts the
set of candidate partitions to have sub-exponential size. Hence, from the statistical learning theory
perspective, an algorithm which is considered ideal in the discrete optimization setting is likely to
overfit. One can construct simple examples (cf. Bubeck and von Luxburg, 2007) which show that
this indeed can happen: here the partitions constructed on the finite sample do not converge to the
true clustering of the data space. In practice, for most cases the discrete optimization approach
cannot be performed perfectly as the corresponding optimization problem is NP hard. Instead,
people resort to heuristics. One approach is to use local optimization procedures potentially ending
in local minima only (this is what happens in the k-means algorithm). Another approach is to
construct a relaxation of the original problem which can be solved efficiently (spectral clustering is
an example for this). In both cases, one usually cannot guarantee how close the heuristic solution
is to the global finite sample optimum. This situation is clearly unsatisfactory: for most clustering
algorithms, we neither have guarantees on the finite sample behavior of the algorithm, nor on its
statistical consistency in the limit.
The following alternative approach looks much more promising. Instead of attempting to solve the
discrete optimization problem over the set of all partitions, and then resorting to relaxations due to
the NP-hardness of this problem, we turn the tables. Directly from the outset, we only consider candidate partitions in some restricted class F_n containing only polynomially many functions. Then the discrete optimization problem of minimizing Q_n over F_n is no longer NP-hard: it can trivially be solved in polynomially many steps by trying all candidates in F_n. From a theoretical point of view this approach has the advantage that the resulting clustering algorithm has the potential of being consistent. In addition, it also leads to practical benefits: rather than dealing with uncontrolled relaxations of the original problem, we restrict the function class to some small enough subset F_n of "reasonable" partitions. Within this subset, we then have complete control over the solution of the optimization problem and can find the global optimum. Put another way, one can also interpret this approach as some controlled way of sparsifying the NP-hard optimization problem, with the positive side effect of obeying the rules of statistical learning theory.
2 Nearest neighbor clustering
In the following we assume that we are given a set of data points X_n = {X_1, ..., X_n} and pairwise distances d_ij = d(X_i, X_j) or pairwise similarities s_ij = s(X_i, X_j). Let Q_n be the finite sample quality function to optimize on the sample. To follow the approach outlined above we have to optimize Q_n over a "small" set F_n of partitions of X_n. Essentially, we have three requirements on F_n: First, the number of functions in F_n should be at most polynomial in n. Second, in the limit of n → ∞ the class F_n should be rich enough to approximate any measurable partition of the underlying space. Third, in order to perform the optimization we need to be able to enumerate all members of this class; that is, the function class F_n should be "constructive" in some sense. A convenient choice satisfying all those properties is the class of "nearest neighbor partitions". This class contains all functions which can be generated as follows. Fix a subset of m ≪ n "seed points" X_{s_1}, ..., X_{s_m} among the given data points. Assign all other data points to their closest seed points; that is, for all j = 1, ..., m define the set Z_j as the subset of data points whose nearest seed point is X_{s_j}. Then consider all partitions of X_n which are constant on the sets Z_j. More formally, for given seeds we define the set F_n as the set of all functions f : X → {1, ..., K} which are constant on the cells of the Voronoi partition induced by the seeds. Here K denotes the number of clusters we want to construct. The function class F_n contains K^m functions, which is polynomial in n if the number m of seeds satisfies m = O(log n). Given F_n, the simplest polynomial-time optimization algorithm is then to evaluate Q_n(f) for all f ∈ F_n and choose the solution f_n = argmin_{f ∈ F_n} Q_n(f). We call the resulting clustering the nearest neighbor clustering and denote it by NNC(Q_n). In practice, the seeds will be chosen randomly among the given data points.
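The brute-force variant of NNC can be made concrete as follows. This is a minimal sketch, not the authors' code: the function names and the NumPy implementation are ours, and Q stands for any finite-sample quality function Q_n (such as WSS_n or Ncut_n) supplied by the caller.

```python
import itertools
import numpy as np

def nearest_neighbor_clustering(X, Q, K, m, rng=None):
    """Brute-force NNC(Q): draw m random seeds, form the Voronoi cells Z_j,
    and return the labeling (constant on the cells) minimizing Q.
    Q(labels, X) is a finite-sample quality function to minimize;
    the search is feasible only while K**m stays small."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(X)
    seeds = rng.choice(n, size=m, replace=False)
    # cell[i] = index of the nearest seed of point i (Voronoi assignment)
    dists = np.linalg.norm(X[:, None, :] - X[seeds][None, :, :], axis=2)
    cell = np.argmin(dists, axis=1)

    best_labels, best_q = None, np.inf
    for seed_labels in itertools.product(range(K), repeat=m):  # K**m candidates
        labels = np.asarray(seed_labels)[cell]
        if len(np.unique(labels)) < K:      # skip partitions with empty clusters
            continue
        q = Q(labels, X)
        if q < best_q:
            best_labels, best_q = labels, q
    return best_labels, best_q
```

For m = O(log n) the K^m enumeration is polynomial in n, matching the complexity argument above; the branch and bound method of Section 4 replaces the exhaustive loop.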
3 Consistency of nearest neighbor clustering
In this section we prove that nearest neighbor clustering is statistically consistent for many clustering
quality functions. Due to the complexity of the proofs and the page restriction we can only present
sketches of the proofs. All details can be found in von Luxburg et al. (2007). Let us start with some
notation. For any clustering function f : R^d → {1, ..., K} we denote by the predicate A(f) a property of the function which can either be true or false. As an example, define A(f) to be true if all clusters have at least a certain minimal size. Moreover, we need to introduce a predicate A_n(f) which will be an "estimator" of A(f) based on the finite sample only. Let m := m(n) ≤ n be the number of seeds used in nearest neighbor clustering. To simplify notation we assume in this section that the seeds are the first m data points; all results remain valid for any other (even random) choice of seeds. As data space we use X = R^d. We define:

  NN_m(x) := NN_{m(n)}(x) := argmin_{y ∈ {X_1,...,X_m}} ||x − y||   (for x ∈ R^d)
  F := { f : R^d → {1, ..., K} | f continuous P-a.e. and A(f) true }
  F_n := F_{X_1,...,X_n} := { f : R^d → {1, ..., K} | f satisfies f(x) = f(NN_m(x)), and A_n(f) is true }
  F̃_n := ∪_{X_1,...,X_n ∈ R^d} F_{X_1,...,X_n}
Furthermore, let Q : F → R be the quality function we aim to minimize, and Q_n : F_n → R an estimator of this quality function on a finite sample. With this notation, the true clustering f* on the underlying space and the nearest neighbor clustering f_n introduced in the last section are given by

  f* ∈ argmin_{f ∈ F} Q(f)   and   f_n ∈ argmin_{f ∈ F_n} Q_n(f).

Later on we will also need to work with the functions

  f_n* ∈ argmin_{f ∈ F_n} Q(f)   and   f̃*(x) := f*(NN_m(x)).

As distance function between different clusterings f, g we will use

  L_n(f, g) := P(f(X) ≠ g(X) | X_1, ..., X_n)

(we need the conditioning in case f or g depend on the data; it has no effect otherwise).
Theorem 1 (Consistency of nearest neighbor clustering). Let (X_i)_{i∈N} be a sequence of points drawn i.i.d. according to some probability measure P on R^d, and m := m(n) the number of seed points used in nearest neighbor clustering. Let Q : F → R be a clustering quality function, Q_n : F̃_n → R its estimator, and A(f) and A_n(f) some predicates. Assume that:

  1. Q_n(f) is a consistent estimator of Q(f) which converges sufficiently fast:
     ∀ε > 0,  K^m (2n)^{(d+1)m^2} sup_{f ∈ F̃_n} P(|Q_n(f) − Q(f)| > ε) → 0.
  2. A_n(f) is an estimator of A(f) which is "consistent" in the following way:
     P(A_n(f̃*) true) → 1  and  P(A(f_n) true) → 1.
  3. Q is uniformly continuous with respect to the distance L_n between F and F_n:
     ∀ε > 0 ∃δ(ε) > 0 ∀f ∈ F ∀g ∈ F_n:  L_n(f, g) ≤ δ(ε) ⟹ |Q(f) − Q(g)| ≤ ε.
  4. lim_{n→∞} m(n) = +∞.

Then nearest neighbor clustering as introduced in Section 2 is weakly consistent, that is, Q(f_n) → Q(f*) in probability.
Proof. (Sketch; for details see von Luxburg et al. (2007).) We split the term P(|Q(f_n) − Q(f*)| ≥ ε) into its two sides P(Q(f_n) − Q(f*) ≤ −ε) and P(Q(f_n) − Q(f*) ≥ ε). It is a straightforward consequence of Condition (2) that the first term converges to 0. The main work consists in bounding the second term. As usual we consider the estimation and approximation errors

  P(Q(f_n) − Q(f*) ≥ ε) ≤ P(Q(f_n) − Q(f_n*) ≥ ε/2) + P(Q(f_n*) − Q(f*) ≥ ε/2).

First we bound the estimation error. In a few lines one can show that

  P(Q(f_n) − Q(f_n*) ≥ ε/2) ≤ P(sup_{f ∈ F_n} |Q_n(f) − Q(f)| ≥ ε/4).

Note that even though the right hand side resembles the standard quantities often considered in statistical learning theory, it is not straightforward to bound, as we do not assume that Q(f) = E Q_n(f). Moreover, note that the function class F_n is data dependent, as the seed points used in the Voronoi partition are data points. To circumvent this problem, we replace the function class F_n by the larger class F̃_n, which is not data dependent. Using symmetrization by a ghost sample (cf. Section 12.3 of Devroye et al., 1996), we then move the supremum out of the probability:

  P( sup_{f ∈ F_n} |Q_n(f) − Q(f)| ≥ ε/4 ) ≤ 2 S_K(F̃_n, 2n) · [ sup_{f ∈ F̃_n} P(|Q_n(f) − Q(f)| ≥ ε/16) ] / [ inf_{f ∈ F̃_n} P(|Q_n(f) − Q(f)| ≤ ε/8) ]     (1)

Note that the unusual denominator in Eq. (1) emerges in the symmetrization step, as we do not assume Q(f) = E Q_n(f). The quantity S_K(F̃_n, 2n) denotes the shattering coefficient, that is, the maximum number of ways that 2n points can be partitioned into K sets using the functions in F̃_n. It is well known (e.g., Section 21.5 of Devroye et al., 1996) that the number of Voronoi partitions of n points using m cells in R^d is bounded by n^{(d+1)m^2}; hence the number of nearest neighbor clusterings into K classes is bounded by S_K(F̃_n, n) ≤ K^m n^{(d+1)m^2}. Under Condition (1) of the Theorem we now see that for fixed ε and n → ∞ the right hand side of (1) converges to 0. Thus the same holds for the estimation error. To deal with the approximation error, observe that if A_n(f̃*) is true, then f̃* ∈ F_n, and by the definition of f_n* we have Q(f_n*) − Q(f*) ≤ Q(f̃*) − Q(f*), and thus

  P(Q(f_n*) − Q(f*) ≥ ε) ≤ P(A_n(f̃*) false) + P(f̃* ∈ F_n and Q(f̃*) − Q(f*) ≥ ε).     (2)

The first expression on the right hand side converges to 0 by Condition (2) in the theorem. Using Condition (3), we can bound the second expression in terms of the distance L_n to obtain

  P(f̃* ∈ F_n, Q(f̃*) − Q(f*) ≥ ε) ≤ P(Q(f̃*) − Q(f*) ≥ ε) ≤ P(L_n(f*, f̃*) ≥ δ(ε)).

Now we use techniques from Fritz (1975) to show that if n is large enough, then the distance between a function f ∈ F evaluated at x and the same function evaluated at NN_m(x) is small. Namely, for any f ∈ F and any ε > 0 there exists some b(δ(ε)) > 0 which does not depend on n and f such that

  P(L_n(f, f(NN_m(·))) > δ(ε)) ≤ (2/δ(ε)) e^{−m b(δ(ε))}.

The quantity δ(ε) has been introduced in Condition (3). For every fixed ε, this term converges to 0 due to Condition (4); thus the approximation error vanishes. □
Now we want to apply our general theorem to particular objective functions. We start with the normalized cut. Let s : R^d × R^d → R_+ be a similarity function which is upper bounded by a constant C. For a clustering f : R^d → {1, ..., K} denote by f_k(x) := 1_{f(x)=k} the indicator function of the k-th cluster. Define the empirical and true cut, volume, and normalized cut as follows:

  cut_n(f_k) := (1/(n(n−1))) Σ_{i,j=1}^n f_k(X_i)(1 − f_k(X_j)) s(X_i, X_j)      cut(f_k) := E_{X,Y} f_k(X)(1 − f_k(Y)) s(X, Y)
  vol_n(f_k) := (1/(n(n−1))) Σ_{i,j=1}^n f_k(X_i) s(X_i, X_j)                    vol(f_k) := E_{X,Y} f_k(X) s(X, Y)
  Ncut_n(f)  := Σ_{k=1}^K cut_n(f_k)/vol_n(f_k)                                  Ncut(f)  := Σ_{k=1}^K cut(f_k)/vol(f_k)

Note that E Ncut_n(f) ≠ Ncut(f), but E cut_n(f) = cut(f) and E vol_n(f) = vol(f). We fix a constant a > 0, a sequence (a_n)_{n∈N} with a_n ≥ a_{n+1} and a_n → a, and define the predicates

  A(f)   is true  :⟺  vol(f_k) > a    for all k = 1, ..., K
  A_n(f) is true  :⟺  vol_n(f_k) > a_n  for all k = 1, ..., K     (3)
Theorem 2 (Consistency of NNC(Ncut_n)). Let (X_i)_{i∈N} be a sequence of points drawn i.i.d. according to some probability measure P on R^d, and let s : R^d × R^d → R_+ be a similarity function which is upper bounded by a constant C. Let m := m(n) be the number of seed points used in nearest neighbor clustering, a > 0 an arbitrary constant, and (a_n)_{n∈N} a monotonically decreasing sequence with a_n → a. Then nearest neighbor clustering using Q := Ncut, Q_n := Ncut_n, and A and A_n as defined in (3) is weakly consistent if m(n) → ∞ and m^2 log n / (n(a − a_n)^2) → 0.
Proof. We will check that all conditions of Theorem 1 are satisfied. First we establish that

  {|cut_n(f_k) − cut(f_k)| ≤ aε} ∩ {|vol_n(f_k) − vol(f_k)| ≤ aε} ⊆ {|cut_n(f_k)/vol_n(f_k) − cut(f_k)/vol(f_k)| ≤ 2ε}.

Applying the McDiarmid inequality to cut_n and vol_n, respectively, we obtain that for all f ∈ F̃_n

  P(|Ncut(f) − Ncut_n(f)| > ε) ≤ 4K exp( −n a^2 ε^2 / (8 C^2 K^2) ).

Together with m^2 log n / (n(a − a_n)^2) → 0 this shows Condition (1) of Theorem 1. The proof of Condition (2) is rather technical, but in the end also follows by applying the McDiarmid inequality to vol_n(f). Condition (3) follows by establishing that for f ∈ F and g ∈ F_n we have

  |Ncut(f) − Ncut(g)| ≤ (4CK/a) L_n(f, g). □
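As a concrete companion to the definitions above, the following sketch (our own code, not the paper's) evaluates the empirical quantities cut_n, vol_n, and Ncut_n from a matrix of pairwise similarities; wrapped as Q = lambda labels, X: empirical_ncut(S, labels, K), it can serve as the quality function in the earlier NNC sketch.

```python
import numpy as np

def empirical_ncut(S, labels, K):
    """Ncut_n(f) from the definitions above, with sums over pairs i != j
    and normalization n(n-1).  S is the (n, n) symmetric matrix of
    pairwise similarities s(X_i, X_j)."""
    S = np.asarray(S, dtype=float)
    S = S - np.diag(np.diag(S))            # drop i == j terms
    labels = np.asarray(labels)
    n = S.shape[0]
    norm = n * (n - 1)
    ncut = 0.0
    for k in range(K):
        in_k = labels == k
        cut_k = S[np.ix_(in_k, ~in_k)].sum() / norm   # cut_n(f_k)
        vol_k = S[in_k, :].sum() / norm               # vol_n(f_k)
        ncut += cut_k / vol_k if vol_k > 0 else np.inf
    return ncut
```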
In fact, Theorem 1 can be applied to a large variety of clustering objective functions. As examples, consider ratio cut, within-cluster sum of squares, and the ratio of between- and within-cluster similarity:

  RatioCut_n(f) := Σ_{k=1}^K cut_n(f_k)/n_k                              RatioCut(f) := Σ_{k=1}^K cut(f_k)/E f_k(X)
  WSS_n(f) := (1/n) Σ_{i=1}^n Σ_{k=1}^K f_k(X_i) ||X_i − c_{k,n}||^2     WSS(f) := E Σ_{k=1}^K f_k(X) ||X − c_k||^2
  BW_n := Σ_{k=1}^K cut_n(f_k)/(vol_n(f_k) − cut_n(f_k))                 BW := Σ_{k=1}^K cut(f_k)/(vol(f_k) − cut(f_k))

Here n_k := Σ_i f_k(X_i)/n is the fraction of points in the k-th cluster, and c_{k,n} := Σ_i f_k(X_i) X_i/(n n_k) and c_k := E f_k(X) X / E f_k(X) are the empirical and true cluster centers.
Theorem 3 (Consistency of NNC(RatioCut_n), NNC(WSS_n), and NNC(BW_n)). Let f_n and f* be the empirical and true minimizers of nearest neighbor clustering using RatioCut_n, WSS_n, or BW_n, respectively. Then, under conditions similar to the ones in Theorem 2, we have RatioCut(f_n) → RatioCut(f*), WSS(f_n) → WSS(f*), and BW(f_n) → BW(f*) in probability. See von Luxburg et al. (2007) for details.
4 Implementation using branch and bound
It is an obvious question how nearest neighbor clustering can be implemented in a more efficient way than simply trying all functions in F_n. Promising candidates are branch and bound methods. They are guaranteed to achieve an optimal solution, but in most cases are much more efficient than a naive implementation. As an example we introduce a branch and bound algorithm for solving NNC(Ncut) for K = 2 clusters. For background reading see Brusco and Stahl (2005). First of all, observe that minimizing Ncut_n over the nearest neighbor function set F_n is the same as minimizing Ncut_m over all partitions of a contracted data set consisting of m "super-points" Z_1, ..., Z_m (super-point Z_i contains all data points assigned to the i-th seed point), endowed with the "super-similarity" function s̃(Z_s, Z_t) := Σ_{X_i ∈ Z_s, X_j ∈ Z_t} s(X_i, X_j). Hence nearest neighbor clustering on the original data set with n points can be performed by directly optimizing Ncut on the contracted data set consisting of only m super-points. Assume we already determined the labels l_1, ..., l_{i−1} ∈ {±1} of the first i−1 super-points. For those points we introduce the sets A = {Z_1, ..., Z_{i−1}}, A− := {Z_j | j < i, l_j = −1}, A+ := {Z_j | j < i, l_j = +1}, for the remaining points the set B = {Z_i, ..., Z_m}, and the set V := A ∪ B of all points. By default we label all points in B with −1 and, in recursion level i, decide about moving Z_i to cluster +1. Analogously to the notation f_k of the previous section, in case K = 2 we can decompose Ncut(f) = cut(f_{+1}) · (1/vol(f_{+1}) + 1/vol(f_{−1})); we call the first term the "cut term" and the second term the "volume term". As is standard in branch and bound, we have to investigate whether the "branch" of clusterings with the specified fixed labels on A could contain a solution which is better than all the previously considered solutions. We use two criteria for this purpose. The first one is very simple: assigning at least one point in B to +1 can only lead to an improvement if this either decreases the cut term or the volume term of Ncut. Necessary conditions for this are max_{j≥i} s̃(Z_j, A+) − s̃(Z_j, A−) ≥ 0 or vol(A+) ≤ vol(V)/2, respectively. If neither is satisfied, we retract. The second criterion involves a lower bound β_l on the Ncut value of
Branch and bound algorithm for Ncut: f* = bbncut(S̃, i, f, β_u) {
  1. Set g := f; set A−, A+, and B as described in the text.
  2. // Deal with special cases:
     • If i = m and A− = ∅ then return f.
     • If i = m and A− ≠ ∅:
       – Set g_i = +1.
       – If Ncut(g) < Ncut(f) return g, else return f.
  3. // Pruning:
     • If vol(A+) > vol(A ∪ B)/2 and max_{j≥i} (s̃(j, A+) − s̃(j, A−)) ≤ 0 return f.
     • Compute lower bound β_l as described in the text.
     • If β_l ≥ β_u then return f.
  4. // If no pruning possible, recursively call bbncut:
     • Set g_i = +1, β_u' := min{Ncut(g), β_u}, call g' := bbncut(S̃, i + 1, g, β_u').
     • Set g_i = −1, β_u'' := min{Ncut(g'), β_u'}, call g'' := bbncut(S̃, i + 1, g, β_u'').
     • If Ncut(g') ≤ Ncut(g'') then return g', else return g''.
}
Figure 1: Branch and bound algorithm for NNC(Ncut) for K = 2. The algorithm is initially called with the super-similarity matrix S̃, i = 2, f = (+1, −1, ..., −1), and β_u the Ncut value of f.
all solutions in the current branch. It compares β_l to an upper bound β_u on the optimal Ncut value, namely to the Ncut value of the best function we have seen so far. If β_l ≥ β_u then no improvement is possible by any clustering in the current branch of the tree, and we retract. To compute β_l, assume we assign a non-empty set B+ ⊆ B to label +1 and the remaining set B− = B \ B+ to label −1. Using the conventions s̃(A, B) = Σ_{Z_i ∈ A, Z_j ∈ B} s̃_{ij} and s̃(A, ∅) = 0, the cut term is bounded by

  cut(A+ ∪ B+, A− ∪ B−) ≥ { min_{j≥i} s̃(Z_j, A+)                       if A− = ∅
                           { s̃(A+, A−) + min_{j≥i} s̃(Z_j, A−)          otherwise.     (4)

The volume term can be maximally decreased in case vol(A+) < vol(V)/2, when choosing B+ such that vol(A+ ∪ B+) = vol(A− ∪ B−) = vol(V)/2. If vol(A+) > vol(V)/2, then an increase of the volume term is unavoidable; this increase is minimal when we move one vertex only to A+:

  1/vol(A+ ∪ B+) + 1/vol(A− ∪ B−) ≥ { 4/vol(V)                                                     if vol(A+) ≤ vol(V)/2
                                     { vol(V) / max_{j≥i} ( vol(A+ ∪ Z_j) · vol(A− ∪ B \ Z_j) )    otherwise.     (5)

Combining both bounds we can now define the lower bound β_l as the product of Eq. (4) and (5).
The entire algorithm is presented in Fig. 1. On top of the basic algorithm one can apply various
heuristics to improve the retraction behavior and thus the average running time of the algorithm. For
example, in our experience it is of advantage to sort the super-points by decreasing degree, and from
one recursion level to the next one alternate between first visiting branch g_i = +1 and g_i = −1.
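The recursion of Figure 1 can be sketched in code as follows. This is a loose, hedged adaptation (our own code, not the authors' implementation): it uses 0-based indexing, implements only the simple degree/volume pruning rule, and marks where the tighter bound β_l of Eqs. (4) and (5) would be tested.

```python
import numpy as np

def ncut_value(S, labels):
    """Ncut of a +/-1 labeling for K = 2: cut(f_+) * (1/vol(f_+) + 1/vol(f_-))."""
    pos = labels > 0
    cut = S[np.ix_(pos, ~pos)].sum()
    vol_pos, vol_neg = S[pos, :].sum(), S[~pos, :].sum()
    return np.inf if min(vol_pos, vol_neg) <= 0 else cut * (1/vol_pos + 1/vol_neg)

def bbncut(S, f, i, beta_u):
    """Depth-first search over +/-1 labelings of the m super-points.
    Labels f[:i] are fixed; f[i:] default to -1.  Initially called with
    f = np.array([+1] + [-1]*(m-1)), i = 1, beta_u = ncut_value(S, f)."""
    m = S.shape[0]
    if i == m:
        return f
    fixed_pos = f[:i] > 0
    vol = S.sum(axis=1)
    # Pruning: moving points to +1 can only help if it may decrease the
    # cut term or the volume term (the two necessary conditions above).
    gain = (S[i:, :i][:, fixed_pos].sum(axis=1)
            - S[i:, :i][:, ~fixed_pos].sum(axis=1)).max() if i > 0 else 0.0
    if vol[:i][fixed_pos].sum() > vol.sum() / 2 and gain <= 0:
        return f
    # ... compute beta_l from Eqs. (4)-(5) here; if beta_l >= beta_u: return f ...
    g = f.copy(); g[i] = +1
    g1 = bbncut(S, g, i + 1, min(ncut_value(S, g), beta_u))
    h = f.copy(); h[i] = -1
    g2 = bbncut(S, h, i + 1, min(ncut_value(S, g1), ncut_value(S, h), beta_u))
    return g1 if ncut_value(S, g1) <= ncut_value(S, g2) else g2
```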
5 Experiments
The main point about nearest neighbor clustering is its statistical consistency: for large n it reveals an approximately correct clustering. In this section we want to show that it also behaves reasonably on smaller samples. Given an objective function Q_n (such as WSS or Ncut) we compare the NNC results to heuristics designed to optimize Q_n directly (such as k-means or spectral clustering). As numeric data sets we used classification benchmark data sets from different repositories (UCI repository, repository by G. Rätsch) and microarray data from Spellman et al. (1998). Moreover, we use graph data sets of the internet graph and of biological, social, and political networks: COSIN collection, collection by M. Newman, email data by Guimerà et al. (2003), electrical power network by Watts and Strogatz (1998), and protein interaction networks of Jeong et al. (2001) and Tsuda et al. (2005). Due to space constraints we focus on the case of constructing K = 2 clusters using the objective functions WSS and Ncut. We always set the number m of seed points for NNC to m = log n. In case of WSS, we compare the result of the k-means algorithm to the result of NNC using the WSS objective function and the Euclidean distance to assign data points to seed points.
Numeric data sets    WSS: K-means    WSS: NNC       Ncut: SC       Ncut: NNC
breast-c  (train)    6.95 ± 0.19     7.04 ± 0.21    0.11 ± 0.02    0.09 ± 0.02
          (test)     7.12 ± 0.20     7.12 ± 0.22    0.22 ± 0.07    0.21 ± 0.07
diabetis  (train)    6.62 ± 0.22     6.71 ± 0.22    0.03 ± 0.02    0.03 ± 0.02
          (test)     6.72 ± 0.22     6.72 ± 0.22    0.04 ± 0.03    0.05 ± 0.05
german    (train)    18.26 ± 0.27    18.56 ± 0.28   0.02 ± 0.02    0.02 ± 0.02
          (test)     18.35 ± 0.30    18.45 ± 0.32   0.04 ± 0.08    0.03 ± 0.03
heart     (train)    10.65 ± 0.46    10.77 ± 0.47   0.18 ± 0.03    0.17 ± 0.02
          (test)     10.75 ± 0.46    10.74 ± 0.46   0.28 ± 0.03    0.30 ± 0.07
splice    (train)    68.99 ± 0.24    69.89 ± 0.24   0.36 ± 0.10    0.44 ± 0.16
          (test)     69.03 ± 0.24    69.18 ± 0.25   0.58 ± 0.09    0.66 ± 0.18
bcw       (train)    3.97 ± 0.26     3.98 ± 0.26    0.02 ± 0.01    0.02 ± 0.01
          (test)     3.98 ± 0.26     3.98 ± 0.26    0.04 ± 0.01    0.08 ± 0.07
ionosph.  (train)    25.72 ± 1.63    25.77 ± 1.63   0.06 ± 0.03    0.04 ± 0.01
          (test)     25.76 ± 1.63    25.77 ± 1.63   0.12 ± 0.11    0.14 ± 0.12
pima      (train)    6.62 ± 0.22     6.73 ± 0.23    0.03 ± 0.03    0.03 ± 0.03
          (test)     6.73 ± 0.23     6.73 ± 0.23    0.05 ± 0.04    0.09 ± 0.13
cellcycle (train)    0.78 ± 0.03     0.78 ± 0.03    0.12 ± 0.02    0.10 ± 0.01
          (test)     0.78 ± 0.03     0.78 ± 0.02    0.16 ± 0.02    0.15 ± 0.03

Network data      NNC    SC
ecoli.interact    0.06   0.06
ecoli.metabol     0.03   0.04
helico            0.16   0.16
beta3s            0.00   0.00
AS-19971108       0.02   0.02
AS-19980402       0.01   1.00
AS-19980703       0.02   0.02
AS-19981002       0.04   0.04
AS-19990114       0.08   0.05
AS-19990402       0.11   0.10
netscience        0.01   0.01
polblogs          0.11   0.11
power             0.00   0.00
email             0.27   0.27
yeastProtInt      0.04   0.06
protNW1           0.00   0.00
protNW2           0.08   1.00
protNW3           0.01   0.80
protNW4           0.03   0.76

Table 1: Left: Numeric data. Results for the K-means algorithm and NNC(WSS) with Euclidean distance, and for spectral clustering (SC) and NNC(Ncut) with commute distance; for each data set the (train) row shows results on the training set and the (test) row the extended results on the test set. Right: Network data. NNC(Ncut) with commute distance and spectral clustering, both trained on the entire graph.
Note that one cannot run K-means on pure network data, which does not provide coordinates. In case of Ncut, we use the Gaussian kernel as similarity function on the numeric data sets. The kernel width σ is set to the mean distance of a data point to its k-th nearest neighbor. We then build the k-nearest neighbor graph (both times using k = ln n). On the network data, we directly use the given graph. For both types of data, we use the commute distance on the graph (e.g., Gutman and Xiao, 2004) as distance function to determine the nearest seed points for NNC.
In the first experiment we compare the values obtained by the different algorithms on the training sets. From the numeric data sets we generated z = 40 training sets by subsampling n/2 points. On each training set, we repeated all algorithms r = 50 times with different random initializations (the seeds in NNC; the centers in K-means; the centers in the K-means post-processing step in spectral clustering). Denoting the quality of an individual run of the algorithm by q, we then report the values mean_z(min_r q) ± stddev_z(min_r q). For the network data sets we ran spectral clustering and NNC on the whole graph. Again we use r = 50 different initializations, and we report min_r q. All results can be found in Table 1. For both the numeric data sets (left table, training rows) and the network data sets (right table) we see that the training performance of NNC is comparable to the other algorithms. This is what we had hoped, and we find it remarkable as NNC is in fact a very simple clustering algorithm.
In the second experiment we try to measure the amount of overfitting induced by the different algorithms. For each of the numeric data sets we cluster n/2 points, extend the clustering to the other
n/2 points, and then compute the objective function on the test set. For the extensions we proceed
in a greedy way: for each test point, we add this test point to the training set and then give it the
label +1 or -1 that leads to the smaller quality value on the augmented training set. We also tried
several other extensions suggested in the literature, but the results did not differ much. To compute
the test error, we then evaluate the quality function on the test set labeled according to the extension. For Ncut, we do this based on the k-nearest neighbor graph on the test set only. Note that this
experiment does not make sense on the network data, as there is no default procedure to construct
the subgraphs for training and testing. The results on the numeric data sets are reported in Table 1
(left table, bottom lines). We see that NNC performs roughly comparably to the other algorithms.
This is not really what we wanted to obtain, our hope was that NNC obtains better test values as it is
less prone to overfitting. The most likely explanation is that both K-means and spectral clustering
have already reasonably good extension properties. This can be due to the fact that as NNC, both
algorithms consider only a certain subclass of all partitions: Voronoi partitions for K-means, and
partitions induced by eigenvectors for spectral clustering. See below for more discussion.
6 Discussion
In this paper we investigate clustering algorithms which minimize quality functions. Our main point is that, as soon as we require statistical consistency, we have to work with "small" function classes F_n. If we even choose F_n to be polynomial, then all problems due to NP-hardness of discrete optimization problems formally disappear, as the remaining optimization problems become inherently polynomial. From a practical point of view, the approach of using a restricted function class F_n can be seen as a more controlled way of simplifying NP-hard optimization problems than the standard approaches of local optimization or relaxation. Carefully choosing the function class F_n such that overly complex target functions are excluded, we can guarantee to pick the best out of all remaining target functions. This strategy circumvents the problem that solutions of local optimization or relaxation heuristics can be arbitrarily far away from the optimal solution.
The generic clustering algorithm we studied in this article is nearest neighbor clustering, which produces clusterings that are constant on small local neighborhoods. We have proved that this algorithm
is statistically consistent for a large variety of popular clustering objective functions. Thus, as opposed to other clustering algorithms such as the K-means algorithm or spectral clustering, nearest
neighbor clustering is guaranteed to converge to a minimizer of the true global optimum on the
underlying space. This statement is much stronger than the results already known for K-means or
spectral clustering. For K-means it has been proved that the global minimizer of the WSS objective function on the sample converges to a global minimizer on the underlying space (e.g., Pollard,
1981). However, as the standard K-means algorithm only discovers a local optimum on the discrete
sample, this result does not apply to the algorithm used in practice. A related effect happens for
spectral clustering, which is a relaxation attempting to minimize Ncut (see von Luxburg (2007) for
a tutorial). It has been shown that under certain conditions the solution of the relaxed problem on
the finite sample converges to some limit clustering (e.g., von Luxburg et al., to appear). However,
it has been conjectured that this limit clustering is not necessarily the optimizer of the Ncut objective function. So for both cases, our consistency results represent an improvement: our algorithm
provably converges to the true limit minimizer of K-means or Ncut, respectively. The same result
also holds for a large number of alternative objective functions used for clustering.
References
M. Brusco and S. Stahl. Branch-and-Bound Applications in Combinatorial Data Analysis. Springer, 2005.
S. Bubeck and U. von Luxburg. Overfitting of clustering and how to avoid it. Preprint, 2007.
Data repository by G. Rätsch. http://ida.first.fraunhofer.de/projects/bench/benchmarks.htm.
Data repository by M. Newman. http://www-personal.umich.edu/~mejn/netdata/.
Data repository by UCI. http://www.ics.uci.edu/~mlearn/MLRepository.html.
Data repository COSIN. http://151.100.123.37/data.html.
L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996.
J. Fritz. Distribution-free exponential error bound for nearest neighbor pattern classification. IEEE Trans. Inf. Th., 21(5):552-557, 1975.
R. Guimerà, L. Danon, A. Díaz-Guilera, F. Giralt, and A. Arenas. Self-similar community structure in a network of human interactions. Phys. Rev. E, 68(6):065103, 2003.
I. Gutman and W. Xiao. Generalized inverse of the Laplacian matrix and some applications. Bulletin de l'Académie Serbe des Sciences et des Arts (Cl. Math. Natur.), 129:15-23, 2004.
H. Jeong, S. Mason, A. Barabási, and Z. Oltvai. Centrality and lethality of protein networks. Nature, 411:41-42, 2001.
D. Pollard. Strong consistency of k-means clustering. Annals of Statistics, 9(1):135-140, 1981.
P. Spellman, G. Sherlock, M. Zhang, V. Iyer, M. Anders, M. Eisen, P. Brown, D. Botstein, and B. Futcher. Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Mol Biol Cell, 9(12):3273-3297, 1998.
K. Tsuda, H. Shin, and B. Schölkopf. Fast protein classification with multiple networks. Bioinformatics, 21(Supplement 1):ii59-ii65, 2005.
V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4), 2007.
U. von Luxburg, S. Bubeck, S. Jegelka, and M. Kaufmann. Supplementary material to "Consistent minimization of clustering objective functions", 2007. http://www.tuebingen.mpg.de/~ule.
U. von Luxburg, M. Belkin, and O. Bousquet. Consistency of spectral clustering. Annals of Statistics, to appear.
D. Watts and S. Strogatz. Collective dynamics of small world networks. Nature, 393:440-442, 1998.
2,596 | 3,354 | Loop Series and Bethe Variational Bounds in Attractive Graphical Models
Erik B. Sudderth and Martin J. Wainwright
Electrical Engineering & Computer Science, University of California, Berkeley
sudderth@eecs.berkeley.edu, wainwrig@eecs.berkeley.edu
Alan S. Willsky
Electrical Engineering & Computer Science, Massachusetts Institute of Technology
willsky@mit.edu
Abstract
Variational methods are frequently used to approximate or bound the partition
or likelihood function of a Markov random field. Methods based on mean field
theory are guaranteed to provide lower bounds, whereas certain types of convex
relaxations provide upper bounds. In general, loopy belief propagation (BP) often provides accurate approximations, but not bounds. We prove that for a class of attractive binary models, the so-called Bethe approximation associated with any
fixed point of loopy BP always lower bounds the true likelihood. Empirically,
this bound is much tighter than the naive mean field bound, and requires no further work than running BP. We establish these lower bounds using a loop series
expansion due to Chertkov and Chernyak, which we show can be derived as a
consequence of the tree reparameterization characterization of BP fixed points.
1 Introduction
Graphical models are widely used in many areas, including statistical machine learning, computer vision, bioinformatics, and communications. Such applications typically require computationally efficient methods for (approximately) solving various problems, including computing marginal distributions and likelihood functions. The variational framework provides a suite of candidate methods, including mean field approximations [3, 9], the sum-product or belief propagation (BP) algorithm [11, 14], Kikuchi and cluster variational methods [23], and related convex relaxations [21].

The likelihood or partition function of an undirected graphical model is of fundamental interest in many contexts, including parameter estimation, error bounds in hypothesis testing, and combinatorial enumeration. In rough terms, particular variational methods can be understood as solving optimization problems whose optima approximate the log partition function. For mean field methods, this optimal value is desirably guaranteed to lower bound the true likelihood [9]. For other methods, including the Bethe variational problem underlying loopy BP [23], optima may either over-estimate or under-estimate the truth. Although "convexified" relaxations of the Bethe problem yield upper bounds [21], to date the best known lower bounds on the partition function are based on mean field theory. Recent work has studied loop series expansions [2, 4] of the partition function, which generate better approximations but not, in general, bounds.

Several existing theoretical results show that loopy BP, and the corresponding Bethe approximation, have desirable properties for graphical models with long cycles [15] or sufficiently weak dependencies [6, 7, 12, 19]. However, these results do not explain the excellent empirical performance of BP in many graphs with short cycles, like the nearest-neighbor grids arising in spatial statistics and low-level vision [3, 18, 22]. Such models often encode "smoothness" priors, and thus have attractive interactions which encourage connected variables to share common values. The first main contribution of this paper is to demonstrate a family of attractive models for which the Bethe variational method always yields lower bounds on the true likelihood. Although we focus on models with binary variables (but arbitrary order of interactions), we suspect that some ideas are more generally applicable. For such models, these lower bounds are easily computed from any fixed point of loopy BP, and empirically improve substantially on naive mean field bounds.
Our second main contribution lies in the route used to establish the Bethe lower bounds. In particular, Sec. 3 uses the reparameterization characterization of BP fixed points [20] to provide a simple
derivation for the loop series expansion of Chertkov and Chernyak [2]. The Bethe approximation
is the first term in this representation of the true partition function. Sec. 4 then identifies attractive models for which all terms in this expansion are positive, thus establishing the Bethe lower
bound. We conclude with empirical results demonstrating the accuracy of this bound, and discuss
implications for future analysis and applications of loopy BP.
2 Undirected Graphical Models
Given an undirected graph G = (V, E), with edges (s, t) ∈ E connecting n vertices s ∈ V, a graphical model associates each node with a random variable X_s taking values x_s ∈ X. For pairwise Markov random fields (MRFs) as in Fig. 1, the joint distribution of x := {x_s | s ∈ V} is specified via a normalized product of local compatibility functions:

  p(x) = (1/Z(ψ)) Π_{s∈V} ψ_s(x_s) Π_{(s,t)∈E} ψ_st(x_s, x_t)     (1)

The partition function Z(ψ) := Σ_{x∈X^n} Π_s ψ_s(x_s) Π_{(s,t)} ψ_st(x_s, x_t), whose value depends on the compatibilities ψ, is defined so that p(x) is properly normalized. We also consider distributions
defined by hypergraphs G = (V, C), where each hyperedge c ∈ C connects some subset of the vertices (c ⊆ V). Letting x_c := {x_s | s ∈ c}, the corresponding joint distribution equals

  p(x) = (1/Z(ψ)) Π_{s∈V} ψ_s(x_s) Π_{c∈C} ψ_c(x_c)     (2)

where as before Z(ψ) = Σ_{x∈X^n} Π_s ψ_s(x_s) Π_c ψ_c(x_c). Such higher-order random fields are conveniently described by the bipartite factor graphs [11] of Fig. 2.
In statistical physics, the partition function arises in the study of how physical systems respond to changes in external stimuli or temperature [23]. Alternatively, when compatibility functions are parameterized by exponential families [20], log Z(ψ) is the family's cumulant generating function, and thus intrinsically related to the model's marginal statistics. For directed Bayesian networks (which can be factored as in eq. (2)), Z(ψ) is the marginal likelihood of observed data, and plays a central role in learning and model selection [9]. However, for general graphs coupling discrete random variables, the cost of exactly evaluating Z(ψ) grows exponentially with n [8]. Computationally tractable families of bounds on the true partition function are thus of great practical interest.
2.1 Attractive Discrete Random Fields
In this paper, we focus on binary random vectors x ∈ {0, 1}^n. We say that a pairwise MRF, with compatibility functions ψ_st : {0, 1}^2 → R_+, has attractive interactions if

  ψ_st(0, 0) ψ_st(1, 1) ≥ ψ_st(0, 1) ψ_st(1, 0)     (3)

for each edge (s, t) ∈ E. Intuitively, this condition requires all potentials to place greater weight on configurations where neighboring variables take the same value. Our later analysis is based on pairwise marginal distributions τ_st(x_s, x_t), which we parameterize by μ_s := E_{τ_st}[X_s] and τ_st := E_{τ_st}[X_s X_t]:

  τ_st(x_s, x_t) = [ 1 − μ_s − μ_t + τ_st    μ_t − τ_st ]
                   [ μ_s − τ_st              τ_st       ]     (4)

We let E_{τ_st}[·] denote expectation with respect to τ_st(x_s, x_t), so that τ_st is the probability that X_s = X_t = 1. This normalized matrix is attractive, satisfying eq. (3), if and only if τ_st ≥ μ_s μ_t.
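As a small illustration (our own code with hypothetical helper names, not the paper's), the following builds the 2×2 marginal of eq. (4) from its mean parameters and tests the attractiveness condition:

```python
import numpy as np

def pairwise_marginal(mu_s, mu_t, tau_st):
    """2x2 pairwise marginal of eq. (4); entry [a, b] is P(X_s = a, X_t = b)."""
    T = np.array([[1 - mu_s - mu_t + tau_st, mu_t - tau_st],
                  [mu_s - tau_st,            tau_st      ]])
    assert T.min() >= -1e-12 and abs(T.sum() - 1.0) < 1e-9, "invalid parameters"
    return T

def is_attractive(T):
    """Eq. (3) applied to a normalized 2x2 marginal; for a binary pair this
    is equivalent to tau_st >= mu_s * mu_t."""
    return T[0, 0] * T[1, 1] >= T[0, 1] * T[1, 0]
```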
For binary variables, the pairwise MRF of eq. (1) provides one representation of a general, inhomogeneous Ising model. In the statistical physics literature, Ising models are typically expressed by coupling random spins z_s ∈ {−1, +1} with symmetric potentials log ψ_st(z_s, z_t) = θ_st z_s z_t. The attractiveness condition of eq. (3) then becomes θ_st ≥ 0, and the resulting model has ferromagnetic interactions. Furthermore, pairwise MRFs satisfy the regularity condition of [10], and thus allow tractable MAP estimation via graph cuts [5], if and only if they are attractive. Even for attractive models, however, calculation of the partition function in non-planar graphs is #P-complete [8].
To define families of higher-order attractive potentials, we first consider a probability distribution τ_c(x_c) on k = |c| binary variables. Generalizing eq. (4), we parameterize such distributions by the following collection of 2^k − 1 mean parameters:

  τ_a := E_{τ_c}[ Π_{s∈a} X_s ],     ∅ ≠ a ⊆ c     (5)

so that τ_s = μ_s for singletons. For example, τ_stu(x_s, x_t, x_u) would be parameterized by {τ_s, τ_t, τ_u, τ_st, τ_su, τ_tu, τ_stu}. For any subset a ⊆ c, we then define the following central moment statistic:

  κ_a := E_{τ_c}[ Π_{s∈a} (X_s − μ_s) ],     ∅ ≠ a ⊆ c     (6)

Note that κ_s = 0, while κ_st = Cov_τ(X_s, X_t) = τ_st − μ_s μ_t. The third-order central moment then equals the cumulant κ_stu = τ_stu − τ_st μ_u − τ_su μ_t − τ_tu μ_s + 2 μ_s μ_t μ_u.
Given these definitions, we say that a probability distribution τ_c(x_c) is attractive if the central moments associated with all subsets a ⊆ c of binary variables are non-negative (κ_a ≥ 0). Similarly, a compatibility function ψ_c(x_c) is attractive if the probability distribution attained by normalizing its values has non-negative central moments. For example, the following potential is easily shown to satisfy this condition for all degrees k = |c|, and any scalar θ_c > 0:

  log ψ_c(x_1, ..., x_k) = { +θ_c   if x_1 = x_2 = ··· = x_k
                           { −θ_c   otherwise     (7)
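The attractiveness test for higher-order factors can be made concrete as follows. This is a hedged, brute-force sketch in our own code (it enumerates all 2^k states, so it is only sensible for small k):

```python
import itertools
import numpy as np

def central_moments(p):
    """All central moments kappa_a (eq. (6)) of a distribution p over
    {0,1}^k, given as an array of shape (2,)*k; returns {a: kappa_a}."""
    k = p.ndim
    states = np.array(list(itertools.product([0, 1], repeat=k)), dtype=float)
    probs = p.reshape(-1)                    # C-order matches product() order
    mu = probs @ states                      # means mu_s = E[X_s]
    kappa = {}
    for r in range(1, k + 1):
        for a in itertools.combinations(range(k), r):
            idx = list(a)
            kappa[a] = probs @ np.prod(states[:, idx] - mu[idx], axis=1)
    return kappa

def is_attractive_factor(psi, tol=1e-12):
    """psi_c is attractive if its normalized distribution has non-negative
    central moments for every subset with |a| >= 2."""
    p = psi / psi.sum()
    return all(v >= -tol for a, v in central_moments(p).items() if len(a) >= 2)
```

For instance, exponentiating the potential of eq. (7) for k = 3 and any θ_c > 0 and passing it to is_attractive_factor should return True.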
2.2 Belief Propagation and the Bethe Variational Principle
Many applications of graphical models require estimates of the posterior marginal distributions of individual variables p_s(x_s) or factors p_c(x_c). Loopy belief propagation (BP) approximates these marginals via a series of messages passed among nodes of the graphical model [14, 23]. Let Γ(s) denote the set of factors which depend on X_s, or equivalently the neighbors of node s in the corresponding factor graph. The BP algorithm then iterates the following message updates:

  m̄_sc(x_s) ∝ ψ_s(x_s) Π_{d∈Γ(s)\c} m_ds(x_s)        m_cs(x_s) ∝ Σ_{x_{c\s}} ψ_c(x_c) Π_{t∈c\s} m̄_tc(x_t)     (8)

The left-hand expression updates the message m̄_sc(x_s) passed from variable node s to factor c. New outgoing messages m_cs(x_s) from factor c to each s ∈ c are then determined by marginalizing the incoming messages from other nodes. At any iteration, appropriately normalized products of these messages define estimates of the desired marginals:

  τ_s(x_s) ∝ ψ_s(x_s) Π_{c∈Γ(s)} m_cs(x_s)        τ_c(x_c) ∝ ψ_c(x_c) Π_{t∈c} m̄_tc(x_t)     (9)
In tree-structured graphs, BP defines a dynamic programming recursion which converges to the exact marginals after finitely many iterations [11, 14]. In graphs with cycles, however, convergence is not guaranteed, and pseudo-marginals computed via eq. (9) are (often good) approximations.
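For concreteness, here is a minimal damped loopy BP loop for the binary pairwise special case of eq. (8). It is our own sketch, not the paper's implementation; the data structures and the damping choice are assumptions.

```python
import numpy as np

def loopy_bp(psi_s, psi_st, edges, iters=200, damp=0.5):
    """Damped parallel loopy BP on a binary pairwise MRF.
    psi_s: {s: length-2 node potential}; psi_st: {(s, t): 2x2 edge potential
    indexed by (x_s, x_t)}; edges: list of pairs.  Returns the node
    pseudo-marginals tau_s of eq. (9) and the converged messages."""
    nbrs = {s: [] for s in psi_s}
    for s, t in edges:
        nbrs[s].append(t)
        nbrs[t].append(s)
    msg = {(s, t): np.full(2, 0.5) for u, v in edges for s, t in ((u, v), (v, u))}

    def pot(s, t):  # edge potential as a table indexed by (x_s, x_t)
        return psi_st[(s, t)] if (s, t) in psi_st else psi_st[(t, s)].T

    for _ in range(iters):
        new = {}
        for s, t in msg:
            b = np.array(psi_s[s], dtype=float)
            for u in nbrs[s]:
                if u != t:
                    b = b * msg[(u, s)]      # incoming messages except from t
            out = pot(s, t).T @ b            # marginalize over x_s
            out = out / out.sum()
            new[(s, t)] = damp * msg[(s, t)] + (1 - damp) * out
        msg = new

    tau = {}
    for s in psi_s:
        b = np.array(psi_s[s], dtype=float)
        for u in nbrs[s]:
            b = b * msg[(u, s)]
        tau[s] = b / b.sum()
    return tau, msg
```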
A wide range of inference algorithms can be derived via variational approximations [9] to the true partition function. Loopy BP is implicitly associated with the following Bethe approximation:

  log Z_β(ψ; τ) = Σ_{s∈V} Σ_{x_s} τ_s(x_s) log ψ_s(x_s) + Σ_{c∈C} Σ_{x_c} τ_c(x_c) log ψ_c(x_c)
                − Σ_{s∈V} Σ_{x_s} τ_s(x_s) log τ_s(x_s) − Σ_{c∈C} Σ_{x_c} τ_c(x_c) log [ τ_c(x_c) / Π_{t∈c} τ_t(x_t) ]     (10)

Fixed points of loopy BP correspond to stationary points of this Bethe approximation [23], subject to the local marginalization constraints Σ_{x_{c\s}} τ_c(x_c) = τ_s(x_s).
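The Bethe value of eq. (10) is easy to evaluate once BP pseudo-marginals are available. The following sketch (our own code) specializes it to pairwise MRFs, where every factor c = {s, t} is a single edge:

```python
import numpy as np

def bethe_log_z(psi_s, psi_st, tau_s, tau_st, edges):
    """Bethe estimate log Z_beta(psi; tau) of eq. (10), pairwise case.
    tau_s: node pseudo-marginals; tau_st: {(s, t): 2x2 edge pseudo-marginal}."""
    def H(p):
        p = np.asarray(p, dtype=float).reshape(-1)
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    log_z = 0.0
    for s in psi_s:
        log_z += (tau_s[s] * np.log(psi_s[s])).sum() + H(tau_s[s])
    for e in edges:
        T = tau_st[e]
        log_z += (T * np.log(psi_st[e])).sum()
        # pairwise entropy correction: H(T) - H_s - H_t (minus mutual information)
        log_z += H(T) - H(tau_s[e[0]]) - H(tau_s[e[1]])
    return log_z
```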
3 Reparameterization and Loop Series Expansions
As discussed in Sec. 2.2, any BP fixed point is in one-to-one correspondence with a set {τ_s, τ_c} of pseudo-marginals associated with each of the graph's nodes s ∈ V and factors c ∈ C. These pseudo-marginals then lead to an alternative parameterization [20] of the factor graph of eq. (2):

  p(x) = (1/Z(τ)) Π_{s∈V} τ_s(x_s) Π_{c∈C} [ τ_c(x_c) / Π_{t∈c} τ_t(x_t) ]     (11)

For pairwise MRFs, the reparameterized compatibility functions equal τ_st(x_s, x_t)/τ_s(x_s)τ_t(x_t). The BP algorithm effectively searches for reparameterizations which are tree-consistent, so that
τ_c(x_c) is the exact marginal distribution of X_c for any tree (or forest) embedded in the original graph [20]. In later sections, we take expectations with respect to τ_c(x_c) of functions f(x_c) defined over individual factors. Although these pseudo-marginals will in general not equal the true marginals p_c(x_c), BP fixed points ensure local consistency so that E_{τ_c}[f(X_c)] is well-defined.

Using eq. (10), it is easily shown that the Bethe approximation Z_β(τ; τ) = 1 for any joint distribution defined by reparameterized potentials as in eq. (11). For simplicity, the remainder of this paper focuses on reparameterized models of this form, and analyzes properties of the corresponding exact partition function Z(τ). The resulting expansions and bounds are then related to the original MRF's partition function via the positive constant Z(ψ)/Z(τ) = Z_β(ψ; τ) of eq. (10).

Recently, Chertkov and Chernyak proposed a finite loop series expansion [2] of the partition function, whose first term coincides with the Bethe approximation. They provide two derivations: one applies a trigonometric identity to Fourier representations of binary variables, while the second employs a saddle point approximation obtained via an auxiliary field of complex variables. The gauge transformations underlying these derivations are a type of reparameterization, but their form is complicated by auxiliary variables and extraneous degrees of freedom. In this section, we show that the fixed point characterization of eq. (11) leads to a more direct, and arguably simpler, derivation.
3.1 Pairwise Loop Series Expansions
We begin by developing a loop series expansion for pairwise MRFs. Given an undirected graph G = (V, E), and some subset F ⊆ E of the graph's edges, let d_s(F) denote the degree (number of neighbors) of node s in the subgraph induced by F. As illustrated in Fig. 1, any subset F for which all nodes s ∈ V have degree d_s(F) ≠ 1 defines a generalized loop [2]. The partition function for any binary, pairwise MRF can then be expanded via an associated set of loop corrections.
Proposition 1. Consider a pairwise MRF defined on an undirected graph G = (V, E), with reparameterized potentials as in eq. (11). The associated partition function then equals

  Z(τ) = 1 + Σ_{∅≠F⊆E} β_F Π_{s∈V} E_{τ_s}[(X_s − μ_s)^{d_s(F)}],     β_F := Π_{(s,t)∈F} β_st     (12)

  β_st := (τ_st − μ_s μ_t) / ( μ_s(1 − μ_s) μ_t(1 − μ_t) ) = Cov_{τ_st}(X_s, X_t) / ( Var_{τ_s}(X_s) Var_{τ_t}(X_t) )     (13)

where only generalized loops F lead to non-zero terms in the sum of eq. (12), and

  E_{τ_s}[(X_s − μ_s)^d] = μ_s(1 − μ_s) [ (1 − μ_s)^{d−1} + (−1)^d (μ_s)^{d−1} ]     (14)

are central moments of the binary variables at individual nodes.
Proof. To establish the expansion of eq. (12), we exploit the following polynomial representation of reparameterized pairwise compatibility functions:

  τ_st(x_s, x_t) / ( τ_s(x_s) τ_t(x_t) ) = 1 + β_st (x_s − μ_s)(x_t − μ_t)     (15)

As verified in [17], this expression is satisfied for any (x_s, x_t) ∈ {0, 1}^2 if β_st is defined as in eq. (13). For attractive models satisfying eq. (3), β_st ≥ 0 for all edges. Using E_τ̄[·] to denote expectation with respect to the fully factorized distribution τ̄(x) = Π_s τ_s(x_s), we then have

  Z(τ) = Σ_{x∈{0,1}^n} Π_{s∈V} τ_s(x_s) Π_{(s,t)∈E} τ_st(x_s, x_t)/(τ_s(x_s) τ_t(x_t))
       = E_τ̄[ Π_{(s,t)∈E} τ_st(X_s, X_t)/(τ_s(X_s) τ_t(X_t)) ] = E_τ̄[ Π_{(s,t)∈E} ( 1 + β_st (X_s − μ_s)(X_t − μ_t) ) ]     (16)

Expanding this polynomial via the expectation operator's linearity, we recover one term for each non-empty subset F ⊆ E of the graph's edges:

  Z(τ) = 1 + Σ_{∅≠F⊆E} E_τ̄[ Π_{(s,t)∈F} β_st (X_s − μ_s)(X_t − μ_t) ]     (17)

The expression in eq. (12) then follows from the independence structure of τ̄(x), and standard formulas for the moments of Bernoulli random variables. To evaluate these terms, note that if d_s(F) = 1, it follows that E_{τ_s}[X_s − μ_s] = 0. There is thus one loop correction for each generalized loop F, in which all connected nodes have degree at least two. □
Figure 1: A pairwise MRF coupling ten binary variables (left), and the nine generalized loops in its loop series
expansion (right). For attractive potentials, two of the generalized loops may have negative signs (second &
third from right), while the core graph of Thm. 1 contains eight variables (far right).
Figure 1 illustrates the set of generalized loops associated with a particular pairwise MRF. These loops effectively define corrections to the Bethe estimate Z(τ) ≈ 1 of the partition function for reparameterized models. Tree-structured graphs do not contain any non-trivial generalized loops, and the Bethe variational approximation is thus exact.

The loop expansion formulas of [2] can be precisely recovered by transforming binary variables to a spin representation, and refactoring terms from the denominator of edge weights β_st to adjacent vertices. Explicit computation of these loop corrections is in general intractable; for example, fully connected graphs with n ≥ 5 nodes have more than 2^n generalized loops. In some cases, accounting for a small set of significant loop corrections may lead to improved approximations to Z(ψ) [4], or more accurate belief estimates for LDPC codes [1]. We instead use the series expansion of Prop. 1 to establish analytic properties of BP fixed points.
3.2 Factor Graph Loop Series Expansions
We now extend the loop series expansion to higher-order MRFs defined on hypergraphs G = (V, C).
Let E = {(s, c) | c ∈ C, s ∈ c} denote the set of edges in the factor graph representation of this
MRF. As illustrated in Fig. 2, we define a generalized loop to be a subset F ⊆ E of edges such that
all connected factor and variable nodes have degree at least two.
Proposition 2. Consider any factor graph G = (V, C) with reparameterized potentials as in
eq. (11), and associated edges E. The partition function then equals

    Z(τ) = 1 + Σ_{∅≠F⊆E} β_F · Π_{s∈V} E_{τ_s}[(X_s − μ_s)^{d_s(F)}],   where β_F := Π_{c∈C} β_{a_c(F)}   (18)

    β_a := E_{τ_c}[Π_{s∈a}(X_s − μ_s)] / Π_{t∈a} μ_t(1 − μ_t) = E_{τ_c}[Π_{s∈a}(X_s − μ_s)] / Π_{t∈a} Var_{τ_t}(X_t)   (19)

where a_c(F) := {s ∈ c | (s, c) ∈ F} denotes the subset of variables linked to factor node c by the
edges in F. Only generalized loops F lead to non-zero terms in the sum of eq. (18).
Proof. As before, we employ a polynomial representation of the reparameterized factors in eq. (11):

    τ_c(x_c) / Π_{t∈c} τ_t(x_t) = 1 + Σ_{a⊆c, |a|≥2} β_a Π_{s∈a} (x_s − μ_s)   (20)

For factor graphs with attractive reparameterized potentials, the constant β_a ≥ 0 for all a ⊆ c.
Note that this representation, which is derived in [17], reduces to that of eq. (15) when c = {s, t}.
Single-variable subsets are excluded in eq. (20) because β_s ∝ E_{τ_s}[X_s − μ_s] = 0.
Applying eq. (20) as in our earlier derivation for pairwise MRFs (see eq. (16)), we may express the
partition function of the reparameterized factor graph as follows:

    Z(τ) = E_τ̄[ Π_{c∈C} τ_c(X_c) / Π_{t∈c} τ_t(X_t) ] = E_τ̄[ Π_{c∈C} (1 + Σ_{∅≠a⊆c} β_a Π_{s∈a} (X_s − μ_s)) ]   (21)
Note that β_a = 0 for any subset where |a| = 1. There is then a one-to-one correspondence between
variable node subsets a ⊆ c, and subsets {(s, c) | s ∈ a} of the factor graph's edges E. Expanding
this expression by F ⊆ E, it follows that each factor c ∈ C contributes a term corresponding to the
chosen subset a_c(F) of its edges:

    Z(τ) = 1 + Σ_{∅≠F⊆E} E_τ̄[ Π_{c∈C} β_{a_c(F)} Π_{s∈a_c(F)} (X_s − μ_s) ]   (22)
Note that β_∅ = 1. Equation (18) then follows from the independence properties of τ̄(x). For a term
in this loop series to be non-zero, there must be no degree one variables, since E_{τ_s}[X_s − μ_s] = 0.
In addition, the definition of β_a implies that there can be no degree one factor nodes.
Figure 2: A factor graph (left) with three binary variables (circles) and four factor nodes (squares), and the
thirteen generalized loops in its loop series expansion (right, along with the full graph).
4 Lower Bounds in Attractive Binary Models
The Bethe approximation underlying loopy BP differs from mean field methods [9], which lower
bound the true log partition function Z(θ), in two key ways. First, while the Bethe entropy (second
line of eq. (10)) is exact for tree-structured graphs, it approximates (rather than bounds) the true
entropy in graphs with cycles. Second, the marginalization condition imposed by loopy BP relaxes
(rather than strengthens) the global constraints characterizing valid distributions [21]. Nevertheless, we now show that for a large family of attractive graphical models, the Bethe approximation
Z_β(θ; τ) of eq. (10) lower bounds Z(θ). In contrast with mean field methods, these bounds hold
only at appropriate BP fixed points, not for arbitrarily chosen pseudo-marginals τ_c(x_c).
4.1 Partition Function Bounds for Pairwise Graphical Models
Consider a pairwise MRF defined on G = (V, E), as in eq. (1). Let V_H ⊆ V denote the set of
nodes which either belong to some cycle in G, or lie on a path (sequence of edges) connecting two
cycles. We then define the core graph H = (V_H, E_H) as the node-induced subgraph obtained by
discarding edges from nodes outside V_H, so that E_H = {(s, t) ∈ E | s, t ∈ V_H}. The unique core
graph H underlying any graph G can be efficiently constructed by iteratively pruning degree one
nodes, or leaves, until all remaining nodes have two or more neighbors. The following theorem
identifies conditions under which all terms in the loop series expansion must be non-negative.
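The leaf-pruning construction translates directly into code; a minimal sketch under our own naming:

```python
def core_graph(nodes, edges):
    """Iteratively remove degree-one nodes (and any isolated nodes this
    leaves behind) until every remaining node has at least two
    neighbors; returns the core graph (V_H, E_H)."""
    nodes, edges = set(nodes), set(edges)
    while True:
        deg = {v: 0 for v in nodes}
        for (s, t) in edges:
            deg[s] += 1
            deg[t] += 1
        prune = {v for v, d in deg.items() if d <= 1}
        if not prune:
            return nodes, edges
        nodes -= prune
        edges = {(s, t) for (s, t) in edges
                 if s not in prune and t not in prune}
```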
Theorem 1. Let H = (V_H, E_H) be the core graph for a pairwise binary MRF, with attractive
potentials satisfying eq. (3). Consider any BP fixed point for which all nodes s ∈ V_H with three or
more neighbors in H have marginals μ_s ≥ 1/2 (or, equivalently, all have μ_s ≤ 1/2). The corresponding Bethe
variational approximation Z_β(θ; τ) then lower bounds the true partition function Z(θ).
Proof. It is sufficient to show that Z(τ) ≥ 1 for any reparameterized pairwise MRF, as in eq. (11).
From eq. (9), note that loopy BP estimates the pseudo-marginal τ_{st}(x_s, x_t) via the product of
ψ_{st}(x_s, x_t) with message functions of single variables. For this reason, attractive pairwise compatibilities always lead to BP fixed points with attractive pseudo-marginals satisfying τ_{st} ≥ μ_s μ_t.
Consider the pairwise loop series expansion of eq. (12). As shown by eq. (13), attractive models
lead to edge weights β_{st} ≥ 0. It is thus sufficient to show that Π_s E_{τ_s}[(X_s − μ_s)^{d_s(F)}] ≥ 0 for
each generalized loop F ⊆ E. Suppose first that the graph has a single cycle, and thus exactly one
non-zero generalized loop F. Because all connected nodes in this cycle have degree two, the bound
follows because E_{τ_s}[(X_s − μ_s)²] ≥ 0. More generally, we clearly have Z(τ) ≥ 1 in graphs where
every generalized loop F associates an even number of neighbors d_s(F) with each node.
Focusing on generalized loops containing nodes with odd degree d ≥ 3, eq. (14) implies that
E_{τ_s}[(X_s − μ_s)^d] ≥ 0 for marginals satisfying 1 − μ_s ≤ μ_s. For BP fixed points in which μ_s ≥ 1/2
for all nodes, we thus have Z(τ) ≥ 1. In particular, the symmetric fixed point μ_s = 1/2 leads to uniformly positive generalized loop corrections. More generally, the marginals of nodes s for which
d_s(F) ≤ 2 for every generalized loop F do not influence the expansion's positivity. Theorem 1
discards these nodes by examining the topology of the core graph H (see Fig. 1 for an example).
For fixed points where μ_s ≤ 1/2 for all nodes, we rewrite the polynomial in the loop expansion of
eq. (15) as (1 + β_{st}(μ_s − x_s)(μ_t − x_t)), and employ an analogous line of reasoning.
In addition to establishing Thm. 1, our arguments show that the true partition function monotonically
increases as additional edges, with attractive reparameterized potentials as in eq. (11), are added to
a graph with fixed pseudo-marginals μ_s ≥ 1/2. For such models, the accumulation of particular
loop corrections, as explored by [4], produces a sequence of increasingly tight bounds on Z(θ). In
addition, we note that the conditions required by Thm. 1 are similar to those underlying classical
correlation inequalities [16] from the statistical physics literature. Indeed, the Griffiths-Kelly-Sherman (GKS) inequality leads to an alternative proof in cases where μ_s = 1/2 for all nodes.
For attractive Ising models in which some nodes have marginals μ_s > 1/2 and others μ_t < 1/2, the loop
series expansion may contain negative terms. For small graphs like that in Fig. 1, it is possible to
use upper bounds on the edge weights β_{st}, which follow from τ_{st} ≤ min(μ_s, μ_t), to cancel negative
loop corrections with larger positive terms. As confirmed by the empirical results in Sec. 4.3, the
lower bound Z(θ) ≥ Z_β(θ; τ) thus continues to hold for many (perhaps all) attractive Ising models
with less homogeneous marginal biases.
4.2 Partition Function Bounds for Factor Graphs
Given a factor graph G = (V, C) relating binary variables, define a core graph H = (V_H, C_H) by
excluding variable and factor nodes which are not members of any generalized loops. As in Sec. 2.2,
let Γ(s) denote the set of factor nodes neighboring variable node s in the core graph H.
Theorem 2. Let H = (V_H, C_H) be the core graph for a binary factor graph, and consider an
attractive BP fixed point for which one of the following conditions holds:

    (i) μ_s ≥ 1/2 for all nodes s ∈ V_H with |Γ(s)| ≥ 3, and β_a ≥ 0 for all a ⊆ c, c ∈ C_H.
    (ii) μ_s ≤ 1/2 for all nodes s ∈ V_H with |Γ(s)| ≥ 3, and (−1)^{|a|} β_a ≥ 0 for all a ⊆ c, c ∈ C_H.

The Bethe approximation Z_β(θ; τ) then lower bounds the true partition function Z(θ).
For the case where μ_s ≥ 1/2, the proof of this theorem is a straightforward generalization of the
arguments in Sec. 4.1. When μ_s ≤ 1/2, we replace all (x_s − μ_s) terms by (μ_s − x_s) in the expansion
of eq. (20), and again recover uniformly positive loop corrections.
For any given BP fixed point, the conditions of Thm. 2 are easy to verify. For factor graphs, it is
more challenging to determine which compatibility functions ψ_c(x_c) necessarily lead to attractive
fixed points. For symmetric potentials as in eq. (7), however, one can show that the conditions on
β_a, a ⊆ c are necessarily satisfied whenever all variable nodes s ∈ V_H have the same bias.
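Assuming one has extracted the fixed-point pseudo-marginals μ_s and the constants β_a of eq. (20), the check is mechanical. The data layout below (β keyed by subset-factor pairs) is purely our assumption:

```python
def theorem2_conditions(mu, Gamma, beta):
    """mu: node -> pseudo-marginal mu_s; Gamma: node -> set of
    neighboring factors in the core graph; beta: dict mapping
    (frozenset a, factor c) to the constant beta_a of eq. (20).
    Returns True if condition (i) or (ii) of Thm. 2 holds."""
    hubs = [s for s in Gamma if len(Gamma[s]) >= 3]
    cond_i = (all(mu[s] >= 0.5 for s in hubs)
              and all(b >= 0 for b in beta.values()))
    cond_ii = (all(mu[s] <= 0.5 for s in hubs)
               and all(((-1) ** len(a)) * b >= 0
                       for (a, c), b in beta.items()))
    return cond_i or cond_ii
```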
4.3 Empirical Comparison of Mean Field and Bethe Lower Bounds
In this section, we compare the accuracy of the Bethe variational bounds established by Thm. 1
to those produced by a naive, fully factored mean field approximation [3, 9]. Using the
spin representation z_s ∈ {−1, +1}, we examine Ising models with attractive pairwise potentials
log ψ_{st}(z_s, z_t) = θ_{st} z_s z_t of varying strengths θ_{st} ≥ 0. We first examine a 2D torus, with potentials
of uniform strength θ_{st} = θ̄ and no local observations. For such MRFs, the exact partition function may be computed via Onsager's classical eigenvector method [13]. As shown in Fig. 3(a), for
moderate θ̄ the Bethe bound Z_β(θ; τ) is substantially tighter than mean field. For large θ̄, only two
states (all spins "up" or "down") have significant probability, so that Z(θ) ≈ 2 exp(θ̄|E|). In this
regime, loopy BP exhibits "symmetry breaking" [6], and converges to one of these states at random,
with corresponding bound Z_β(θ; τ) ≈ exp(θ̄|E|). As verified in Fig. 3(a), as θ̄ → ∞ the difference
log Z(θ) − log Z_β(θ; τ) → log 2 ≈ 0.69 thus remains bounded.
We also consider a set of random 10 × 10 nearest-neighbor grids, with inhomogeneous pairwise
potentials sampled according to |θ_{st}| ∼ N(0, σ̄²), and observation potentials log ψ_s(z_s) = θ_s z_s,
|θ_s| ∼ N(0, 0.1²). For each candidate σ̄, we sample 100 random MRFs, and plot the average difference log Z_β(θ; τ) − log Z(θ) between the true partition function and the BP (or mean field) fixed
point reached from a random initialization. Fig. 3(b) first considers MRFs where θ_s > 0 for all
nodes, so that the conditions of Thm. 1 are satisfied for all BP fixed points. For these models, the
Bethe bound is extremely accurate. In Fig. 3(c), we also consider MRFs where the observation
potentials θ_s are of mixed signs. Although this sometimes leads to BP fixed points with negative
associated loop corrections, the Bethe variational approximation nevertheless always lower bounds
the true partition function in these examples. We hypothesize that this bound in fact holds for all
attractive, binary pairwise MRFs, regardless of the observation potentials.
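The sampling step of this experiment is simple to restate in code (a sketch under our own naming; the exact log Z computation, BP, and mean-field fits that the figure requires are not shown):

```python
import numpy as np

def sample_grid(n=10, sigma=0.5, mixed_signs=False, rng=None):
    """One random Ising grid as in Sec. 4.3: attractive couplings
    |theta_st| ~ N(0, sigma^2) on nearest-neighbor edges, observation
    potentials with |theta_s| ~ N(0, 0.1^2), positive for Fig. 3(b)
    or random-sign for Fig. 3(c)."""
    rng = rng or np.random.default_rng(0)
    edges = [((i, j), (i, j + 1)) for i in range(n) for j in range(n - 1)]
    edges += [((i, j), (i + 1, j)) for i in range(n - 1) for j in range(n)]
    theta_st = {e: abs(rng.normal(0.0, sigma)) for e in edges}
    sign = lambda: rng.choice([-1.0, 1.0]) if mixed_signs else 1.0
    theta_s = {(i, j): sign() * abs(rng.normal(0.0, 0.1))
               for i in range(n) for j in range(n)}
    return theta_st, theta_s
```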
5 Discussion
We have provided an alternative, direct derivation of the partition function's loop series expansion,
based on the reparameterization characterization of BP fixed points. We use this expansion to prove
that the Bethe approximation lower bounds the true partition function in a family of binary attractive
[Figure 3 plots omitted: difference from the true log partition, log Z_β(θ; τ) − log Z(θ), versus edge strength, for loopy belief propagation (dark blue) and mean field (light green), panels (a)-(c).]
Figure 3: Bethe (dark blue, top) and naive mean field (light green, bottom) lower bounds on log Z(θ) for three
families of attractive, pairwise Ising models. (a) 30 × 30 torus with no local observations and homogeneous
potentials. (b) 10 × 10 grid with random, inhomogeneous potentials and all pseudo-marginals μ_s > 1/2, satisfying the conditions of Thm. 1. (c) 10 × 10 grid with random, inhomogeneous potentials and pseudo-marginals
of mixed biases. Empirically, the Bethe lower bound also holds for these models.
models. These results have potential implications for the suitability of loopy BP in approximate
parameter estimation [3], as well as its convergence dynamics. We are currently exploring generalizations of our results to other families of attractive, or "nearly" attractive, graphical models.
Acknowledgments The authors thank Yair Weiss for suggesting connections to loop series expansions,
and helpful conversations. Funding provided by Army Research Office Grant W911NF-05-1-0207, National
Science Foundation Grant DMS-0528488, and NSF Career Grant CCF-0545862.
References
[1] M. Chertkov and V. Y. Chernyak. Loop calculus helps to improve belief propagation and linear programming decodings of low density parity check codes. In Allerton Conf., 2006.
[2] M. Chertkov and V. Y. Chernyak. Loop series for discrete statistical models on graphs. J. Stat. Mech., 2006:P06009, June 2006.
[3] B. J. Frey and N. Jojic. A comparison of algorithms for inference and learning in probabilistic graphical models. IEEE Trans. PAMI, 27(9):1392-1416, Sept. 2005.
[4] V. Gómez, J. M. Mooij, and H. J. Kappen. Truncating the loop series expansion for BP. JMLR, 8:1987-2016, 2007.
[5] D. M. Greig, B. T. Porteous, and A. H. Seheult. Exact maximum a posteriori estimation for binary images. J. R. Stat. Soc. B, 51(2):271-279, 1989.
[6] T. Heskes. On the uniqueness of loopy belief propagation fixed points. Neural Comp., 16:2379-2413, 2004.
[7] A. T. Ihler, J. W. Fisher, and A. S. Willsky. Loopy belief propagation: Convergence and effects of message errors. JMLR, 6:905-936, 2005.
[8] M. Jerrum and A. Sinclair. Polynomial-time approximation algorithms for the Ising model. SIAM J. Comput., 22(5):1087-1116, Oct. 1993.
[9] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37:183-233, 1999.
[10] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? IEEE Trans. PAMI, 26(2):147-159, Feb. 2004.
[11] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Trans. IT, 47(2):498-519, Feb. 2001.
[12] J. M. Mooij and H. J. Kappen. Sufficient conditions for convergence of loopy belief propagation. In UAI 21, pages 396-403. AUAI Press, 2005.
[13] L. Onsager. Crystal statistics I: A two-dimensional model with an order-disorder transition. Physical Review, 65:117-149, 1944.
[14] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, 1988.
[15] T. J. Richardson and R. L. Urbanke. The capacity of low-density parity-check codes under message-passing decoding. IEEE Trans. IT, 47(2):599-618, Feb. 2001.
[16] S. B. Shlosman. Correlation inequalities and their applications. J. Math. Sci., 15(2):79-101, Jan. 1981.
[17] E. B. Sudderth, M. J. Wainwright, and A. S. Willsky. Loop series and Bethe variational bounds in attractive graphical models. UC Berkeley, EECS department technical report, in preparation, 2008.
[18] M. F. Tappen and W. T. Freeman. Comparison of graph cuts with belief propagation for stereo, using identical MRF parameters. In ICCV, volume 2, pages 900-907, 2003.
[19] S. C. Tatikonda and M. I. Jordan. Loopy belief propagation and Gibbs measures. In UAI 18, pages 493-500. Morgan Kaufmann, 2002.
[20] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. Tree-based reparameterization framework for analysis of sum-product and related algorithms. IEEE Trans. IT, 49(5):1120-1146, May 2003.
[21] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function. IEEE Trans. IT, 51(7):2313-2335, July 2005.
[22] Y. Weiss. Comparing the mean field method and belief propagation for approximate inference in MRFs. In D. Saad and M. Opper, editors, Advanced Mean Field Methods. MIT Press, 2001.
[23] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free energy approximations and generalized belief propagation algorithms. IEEE Trans. IT, 51(7):2282-2312, July 2005.
2,597 | 3,355 | Sequential Hypothesis Testing under Stochastic
Deadlines
Peter I. Frazier
ORFE
Princeton University
Princeton, NJ 08544
pfrazier@princeton.edu
Angela J. Yu
CSBMB
Princeton University
Princeton, NJ 08544
ajyu@princeton.edu
Abstract
Most models of decision-making in neuroscience assume an infinite horizon,
which yields an optimal solution that integrates evidence up to a fixed decision
threshold; however, under most experimental as well as naturalistic behavioral
settings, the decision has to be made before some finite deadline, which is often
experienced as a stochastic quantity, either due to variable external constraints or
internal timing uncertainty. In this work, we formulate this problem as sequential
hypothesis testing under a stochastic horizon. We use dynamic programming tools
to show that, for a large class of deadline distributions, the Bayes-optimal solution
requires integrating evidence up to a threshold that declines monotonically over
time. We use numerical simulations to illustrate the optimal policy in the special
cases of a fixed deadline and one that is drawn from a gamma distribution.
1 Introduction
Major strides have been made in understanding the detailed dynamics of decision making in simple two-alternative forced choice (2AFC) tasks, at both the behavioral and neural levels. Using a
combination of probabilistic and dynamic programming tools, it has been shown that when the decision horizon is infinite (i.e. no deadline), the optimal policy is to accumulate sensory evidence for
one alternative versus the other until a fixed threshold, and report the corresponding hypothesis [1].
Under similar experimental conditions, it appears that humans and animals accumulate information
and make perceptual decisions in a manner close to this optimal strategy [2-4], and that neurons
in the posterior parietal cortex exhibit response dynamics similar to that prescribed by the optimal
algorithm [6]. However, in most 2AFC experiments, as well as in more natural behavior, the decision has to be made before some finite deadline. This corresponds to a finite-horizon sequential
decision problem. Moreover, there is variability associated with that deadline either due to external
variability associated with the deadline imposition itself, or due to internal timing uncertainty about
how much total time is allowed and how much time has already elapsed. In either case, with respect
to the observer's internal timer, the deadline can be viewed as a stochastic quantity.
In this work, we analyze the optimal strategy and its dynamics for decision-making under the pressure of a stochastic deadline. We show through analytical and numerical analysis that the optimal
policy is a monotonically declining decision threshold over time. A similar result for deterministic deadlines was shown in [5]. Declining decision thresholds have been used in [7] to model the
speed vs. accuracy tradeoff, and also in the context of sequential hypothesis testing ([8]). We first
present a formal model of the problem, as well as the main theoretical results (Sec. 2). We then use
numerical simulations to examine the optimal policy in some specific examples (Sec. 3).
2 Decision-making under a Stochastic Deadline
We assume that on each trial, a sequence of i.i.d. inputs is observed: x1, x2, x3, . . .. With probability
p0, all the inputs for the trial are generated from a probability density f1, and, with probability
1 − p0, they are generated from an alternate probability density f0. Let θ be the index of the generating
distribution. The objective is to decide whether θ is 0 or 1 quickly and accurately, while also under
the pressure of a stochastic decision deadline.
We define x^t := (x1, x2, . . . , xt) to be the vector of observations made by time t. This vector of
observations gives information about the generating density θ. Defining pt := P{θ = 1 | x^t}, we
observe that pt+1 may be obtained iteratively from pt via Bayes' rule,

    pt+1 = P{θ = 1 | x^{t+1}} = pt f1(xt+1) / [pt f1(xt+1) + (1 − pt) f0(xt+1)].   (1)
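In code, eq. (1) is a one-line update; a minimal sketch (names are ours):

```python
def bayes_update(p_t, x, f0, f1):
    """Posterior P{theta = 1 | x^(t+1)} from eq. (1); f0 and f1 return
    the likelihood of the new observation x under each hypothesis."""
    num = p_t * f1(x)
    return num / (num + (1.0 - p_t) * f0(x))
```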
Let D be a deadline drawn from a known distribution that is independent of the observations x^t. We
will assume that the deadline D is observed immediately and effectively terminates the trial. Let
c > 0 be the cost associated with each unit time of decision delay, and d ≥ .5 be the cost associated
with exceeding the deadline, where both c and d are normalized against the (unit) cost of making
an incorrect decision. We choose d ≥ .5 so that d is never smaller than the expected penalty for
guessing at θ. This avoids situations in which we prefer to exceed the deadline.
A decision-policy π is a sequence of mappings, one for each time t, from the observations so far
to the set of possible actions: stop and choose θ = 0; stop and choose θ = 1; or continue sampling. We define τπ to be the time when the decision is made to stop sampling under decision-policy
π, and δπ to be the hypothesis chosen at this time; both are random variables dependent on the
sequence of observations. More formally, π := (π^0, π^1, . . .), where π^t(x^t) ↦ {0, 1, continue},
and τπ := min(D, inf{t ∈ N : π^t(x^t) ∈ {0, 1}}), δπ := π^{τπ}(x^{τπ}). We may also define
νπ := inf{t ∈ N : π^t(x^t) ∈ {0, 1}} to be the time when the policy would choose to stop sampling if the deadline were to fail to occur. Then τπ = min(D, νπ).
Our loss function is defined to be l(τ, δ; θ, D) = 1_{δ≠θ} 1_{τ<D} + cτ + d·1_{τ≥D}. The goal is to
find a decision-policy π which minimizes the total expected loss

    Lπ := ⟨l(τπ, δπ; θ, D)⟩_{θ,D,x} = P(δπ ≠ θ, τπ < D) + c⟨τπ⟩ + d P(D ≤ τπ).   (2)
2.1 Dynamic Programming
A decision policy is characterized by how τ and δ are generated as a function of the data observed
so far. Thus, finding the optimal decision-policy is equivalent to finding the random variables τ and
δ that minimize ⟨l(τ, δ; θ, D)⟩. The optimal policy decides whether or not to stop based on whether
pt is inside a set C^t ⊆ [0, 1] or not. Our goal is to show that C^t is a continuous interval, that
C^{t+1} ⊆ C^t, and that for large enough t, C^t is empty. That is, the optimal policy is to iteratively
compute pt based on incoming data, and to decide for the respective hypothesis as soon as it hits
either a high (δ = 1) or low (δ = 0) threshold. Furthermore, the two thresholds decay toward each
other over time and eventually meet.
We will use tools from dynamic programming to analyze this problem. Our approach is illustrated
in Fig. 1. The red line denotes the cost of stopping at time t as a function of the current belief
pt = p. The blue line denotes the cost of continuing at least one more time step, as a function of
pt. The black line denotes the cost of continuing at least two more time steps, as a function of pt.
Because the cost of continuing is concave in pt (Lemma 1), and larger than stopping for pt ∈ {0, 1}
(Lemma 4), the continuation region is an interval delimited by where the costs of continuing and
stopping intersect (blue dashed lines). Moreover, because the cost of continuing two more timesteps
is always larger than that of continuing one more for a given amount of belief (Lemmas 2 and 3),
that "window" of continuation narrows over time (Main Theorem). This method of proof parallels
that of optimality for the classic sequential probability ratio test in [10].
Before proving the lemmas and the theorem, we first introduce some additional definitions. The
value function V : N ? [0, 1] 7? R+ specifies the minimal cost (incurred by the optimal policy) at
time t, given that the deadline has not yet occurred, that xt have been observed, and that the current
cumulative evidence for ? = 1 is pt : V (t, pt ) , inf ? ?t,? hl(?, ?; ?, D) | D > t, pt i?,D,x . The cost
associated with continuing at time t, known as the Q-factor for continuing and denoted by Q, takes
the form
Q(t, pt ) , inf hl(?, ?; ?, D) | D > t, pt i?,D,x .
(3)
? ?t+1,?
[Figure 1 plot omitted: the stopping cost Q(t, p) (red), the continuation cost Q̄(t, p) (blue), and Q̄(t + 1, p) − c (black), plotted against pt = p ∈ [0, 1].]
Figure 1: Comparison of the cost Q(t, p) of stopping at time t (red); the cost Q̄(t, p) of continuing
at time t (blue solid line); and Q̄(t + 1, p) − c (black solid line), which is the cost of continuing at
time t + 1 minus the adjustment Q(t + 1, p) − Q(t, p) = c. The continuation region C^t is the interval
between the intersections of the solid blue and red lines, marked by the blue dotted lines, and the
continuation region C^{t+1} is the interval between the intersections of the solid black and red lines,
marked by the black dotted lines. Note that Q̄(t + 1, p) − c ≥ Q̄(t, p), so C^t contains C^{t+1}.
Note that, in general, both V(t, pt) and Q̄(t, pt) may be difficult to compute due to the need to
optimize over infinitely many decision policies. Conversely, the cost associated with stopping at
time t, known as the Q-factor for stopping and denoted by Q, is easily computed as

    Q(t, pt) = inf_{δ=0,1} ⟨l(t, δ; θ, D) | D > t, pt⟩_{θ,D,x} = min{pt, 1 − pt} + ct,   (4)

where the infimum is obtained by choosing δ = 0 if pt ≤ .5 and choosing δ = 1 otherwise.
An optimal stopping rule is to stop the first time the expected cost of continuing exceeds that of
stopping, and to choose δ = 0 or δ = 1 to minimize the probability of error given the accumulated
evidence (see [10]). That is, τ* = inf{t ≥ 0 : Q̄(t, pt) ≥ Q(t, pt)} and δ* = 1_{p_{τ*} ≥ 1/2}.
We define the continuation region at time t by C^t := {pt ∈ [0, 1] : Q(t, pt) > Q̄(t, pt)}, so that
τ* = inf{t ≥ 0 : pt ∉ C^t}. Although we have obtained an expression for the optimal policy in
terms of Q̄(t, p) and Q(t, p), computing Q̄(t, p) is difficult in general.
Lemma 1. The function p ↦ Q̄(t, p) is concave with respect to p for each t ∈ N.
Proof. We may restrict the infimum in Eq. 3 to be over only those τ and δ depending on D and
the future observations x_{t+1:∞} := {xt+1, xt+2, . . .}. This is due to two facts. First, the expectation
is conditioned on pt, which contains all the information about θ available in the past observations
x^t, and makes it unnecessary for the optimal policy to depend on x^t except through pt. Second,
dependence on pt in the optimal policy may be made implicit by allowing the infimum to be attained
by different τ and δ for different values of pt but removing explicit dependence on pt from the
individual policies over which the infimum is taken. With τ and δ chosen from this restricted set
of policies, we note that the distribution of the future observations x_{t+1:∞} is entirely determined by θ
and so we have ⟨l(τ, δ; θ, D) | θ, pt⟩_{D,x} = ⟨l(τ, δ; θ, D) | θ⟩_{D,x}. Summing over the possible
values of θ, we may then write:

    ⟨l(τ, δ; θ, D) | pt⟩_{θ,D,x} = Σ_{k∈{0,1}} ⟨l(τ, δ; θ, D) | θ = k⟩_{D,x} P{θ = k | pt}
                                = ⟨l(τ, δ; θ, D) | θ = 0⟩_{D,x} (1 − pt) + ⟨l(τ, δ; θ, D) | θ = 1⟩_{D,x} pt.

Eq. (3) can then be rewritten as:

    Q̄(t, pt) = inf_{τ≥t+1, δ} [⟨l(τ, δ; θ, D) | θ = 0⟩_{D,x} (1 − pt) + ⟨l(τ, δ; θ, D) | θ = 1⟩_{D,x} pt],

where this infimum is again understood to be taken over this set of policies depending only upon
observations after time t. Since neither ⟨l(τ, δ; θ, D) | θ = 0⟩ nor ⟨l(τ, δ; θ, D) | θ = 1⟩ depends on
pt, this is the infimum of a collection of linear functions in pt, and hence is concave in pt ([9]).
We now need a lemma describing how expected cost depends on the distribution of the deadline. Let
D′ be a deadline whose distribution is different than that of D. Let π* be the policy that is optimal
given that the deadline has distribution D, and denote ν_{π*} by ν*. Then define

    V′(t, pt) := ⟨min(p_{ν*}, 1 − p_{ν*}) 1_{ν* < D′} + c·min(ν*, D′) + d·1_{ν* ≥ D′} | pt, D′ > t⟩_{θ,D′,x}

so that V′ gives the expected cost of taking the stopping time ν*, which is optimal for deadline D,
and applying it to the situation with deadline D′. Similarly, let Q̄′(t, pt) and Q′(t, pt) denote the
corresponding expected costs under ν* and D′ given that we continue or stop, respectively, at time
t given pt and D′ > t. Note that Q′(t, pt) = Q(t, pt) = min(pt, 1 − pt) + ct. These definitions
are the basis for the following lemma, which essentially shows that replacing the deadline D with
a less urgent deadline D′ lowers cost.

Lemma 2. If D′ is such that P{D′ > t + 1 | D′ > t} ≥ P{D > t + 1 | D > t} for all t, then
V′(t, p) ≤ V(t, p) and Q̄′(t, p) ≤ Q̄(t, p) for all t and p.
Proof. First let us show that if we have V′(t + 1, p′) ≤ V(t + 1, p′) for some fixed t and all p′, then
we also have Q̄′(t, p) ≤ Q̄(t, p) for that same t and all p. This is the case because, if we fix t, then

    Q̄(t, pt) = (d + c(t+1)) P{D = t+1 | D > t} + ⟨V(t+1, pt+1) | pt⟩_{xt+1} P{D > t+1 | D > t}
             = d + c(t+1) + ⟨V(t+1, pt+1) − (d + c(t+1)) | pt⟩_{xt+1} P{D > t+1 | D > t}
             ≥ d + c(t+1) + ⟨V(t+1, pt+1) − (d + c(t+1)) | pt⟩_{xt+1} P{D′ > t+1 | D′ > t}
             ≥ d + c(t+1) + ⟨V′(t+1, pt+1) − (d + c(t+1)) | pt⟩_{xt+1} P{D′ > t+1 | D′ > t} = Q̄′(t, p).

In the first inequality we have used two facts: that V(t + 1, pt+1) ≤ Q(t + 1, pt+1) =
min(pt+1, 1 − pt+1) + c(t + 1) ≤ d + c(t + 1) (which is true because d ≥ .5); and that
P{D > t + 1 | D > t} ≤ P{D′ > t + 1 | D′ > t}. In the second inequality we have used our
assumption that V′(t + 1, p′) ≤ V(t + 1, p′) for all p′.

Now consider a finite horizon version of the problem where ν* is only optimal among stopping
times bounded above by a finite integer T. We will show the lemma for this case, and the lemma for
the infinite horizon version of the problem follows by taking the limit as T → ∞.

We induct backwards on t. Since π* is required to stop at T, we have V(T, pT) = Q(T, pT) =
Q′(T, pT) = V′(T, pT). Now for the induction step. Fix p and t < T. If π* chooses to stop
at t when pt = p, then V(t, p) = Q(t, p) = Q′(t, p) = V′(t, p). If π* continues instead, then
V(t, p) = Q̄(t, p) ≥ Q̄′(t, p) = V′(t, p) by the induction hypothesis.
Note the requirement that d ≥ 1/2 in the previous lemma. If this requirement is not met, then if pt
is such that d < min(pt, 1 − pt), we may prefer to get timed out rather than choose δ = 0 or
δ = 1 and suffer the expected penalty of min(pt, 1 − pt) for choosing incorrectly. In this situation,
since the conditional probability P{D = t + 1 | D > t} that we will time out in the next time period
grows as time moves forward, the continuation region may expand with time rather than contract.
Under most circumstances, however, it seems reasonable to assume the deadline cost to be at least
as large as that of making an error.
We now state Lemma 3, which shows that the cost of delaying by one time period is at least as large
as the continuation cost c, but may be larger because the delay causes the deadline to approach more
rapidly.

Lemma 3. For each t ∈ N and p ∈ (0, 1), Q̄(t − 1, p) ≤ Q̄(t, p) − c.
Proof. Fix t. Let τ′ := inf{s ≥ t + 1 : ps ∉ C^s}, so that min(τ′, D) attains the infimum for
Q̄(t, pt). Also define ν := inf{s ≥ t : ps+1 ∉ C^{s+1}} and τ″ := min(D, ν). Since τ″ is within the set
over which the infimum defining Q̄(t − 1, p) is taken,

    Q̄(t − 1, p) ≤ ⟨min(p_{τ″}, 1 − p_{τ″}) 1_{τ″<D} + c τ″ + d 1_{τ″≥D} | D > t − 1, pt−1 = p⟩_{D,x}
               = ⟨min(p_ν, 1 − p_ν) 1_{ν<D} + c min(D, ν) + d 1_{ν≥D} | D > t − 1, pt−1 = p⟩_{D,x}
               = ⟨min(p_{τ′}, 1 − p_{τ′}) 1_{τ′−1<D} + c min(D, τ′ − 1) + d 1_{τ′−1≥D} | D > t − 1, pt = p⟩_{D,x},

where the last step is justified by the stationarity of the observation process, which implies that the
joint distribution of (ps)_{s≥t}, p_{τ′}, and τ′ conditioned on pt = p is the same as the joint distribution
of (ps−1)_{s≥t}, p_ν, and ν + 1 conditioned on pt−1 = p. Let D′ = D + 1 and we have

    Q̄′(t, p) = ⟨min(p_{τ′}, 1 − p_{τ′}) 1_{τ′<D′} + c min(D′, τ′) + d 1_{τ′≥D′} | D′ > t, pt = p⟩_{D′,x},

so Q̄(t − 1, p) ≤ Q̄′(t, p) − c. Finally, as D′ satisfies the requirements of Lemma 2, Q̄′(t, p) ≤
Q̄(t, p).
Lemma 4. For t ∈ N, Q̄(t, 0) = Q̄(t, 1) = c(t + 1) + d P{D = t + 1 | D > t}.

Proof. On the event pt = 0, we have that P{θ = 0} = 1 and the policy attaining the infimum in (3) is
τ* = t + 1, δ* = 0. Thus, Q̄(t, 0) becomes

    Q̄(t, 0) = ⟨l(τ*, δ*; θ, D) | D > t, pt = 0⟩_{D,xt+1} = ⟨l(τ*, δ*; θ, D) | D > t, θ = 0⟩_{D,xt+1}
            = ⟨d 1_{t+1≥D} + c(t + 1) | D > t, θ = 0⟩_{D,xt+1} = c(t+1) + d P{D = t+1 | D > t}.

Similarly, on the event pt = 1, we have that P{θ = 1} = 1 and the policy attaining the infimum in (3)
is τ* = t + 1, δ* = 1. Thus, Q̄(t, 1) = c(t+1) + d P{D = t + 1 | D > t}.
We are now ready for the main theorem, which shows that C^t is either empty or an interval, and
that C^{t+1} ⊆ C^t. To illustrate our proof technique, we plot Q(t, p), Q̄(t, p), and Q̄(t + 1, p) − c as
functions of p in Figure 1. As noted, the continuation region C^t is the set of p such that Q(t, p) >
Q̄(t, p). To show that C^t is either empty or an interval, we note that Q̄(t, p) is a concave function
in p (Lemma 1) whose values at the endpoints p = 0, 1 are greater than the corresponding values of
Q(t, p) (Lemma 4). Such a concave function may only intersect Q(t, p), which is a constant plus
min(p, 1 − p), either twice or not at all. When it intersects twice, we have the situation pictured in
Figure 1, in which C^t is a non-empty interval, and when it does not intersect, C^t is empty.

To show that C^{t+1} ⊆ C^t we note that the difference between Q(t + 1, p) and Q(t, p) is the constant
c. Thus, to show that C^t, the set where Q(t, p) exceeds Q̄(t, p), is larger than C^{t+1}, the set where
Q(t + 1, p) exceeds Q̄(t + 1, p), it is enough to show that the difference between Q̄(t + 1, p)
and Q̄(t, p) is at least as large as the adjustment c, which we have done in Lemma 3.

Theorem. At each time t ∈ N, the optimal continuation region C^t is either empty or a closed
interval, and C^{t+1} ⊆ C^t.
Proof. Fix t ∈ N. We begin by showing that C^{t+1} ⊆ C^t. If C^{t+1} is empty then the statement
follows trivially, so consider the case when C^{t+1} ≠ ∅. Choose p ∈ C^{t+1}. Then

    Q̄(t, p) ≤ Q̄(t + 1, p) − c < Q(t + 1, p) − c = min{p, 1 − p} + ct = Q(t, p).

Thus, p ∈ C^t, implying C^{t+1} ⊆ C^t.
Now suppose that C^t is non-empty and we will show it must be a closed interval. Let a^t := inf C^t
and b^t := sup C^t. Since C^t is a non-empty subset of [0, 1], we have a^t, b^t ∈ [0, 1]. Furthermore,
a^t > 0 because Q̄(t, p) ≥ c(t + 1) + d P{D = t + 1 | D > t} > ct = Q(t, 0) for all p, and
the continuity of Q̄ and Q implies that Q̄(t, p) > Q(t, p) > 0 for p in some open interval around 0.
Similarly, b^t < 1. Thus, a^t, b^t ∈ (0, 1).

We will show first that [a^t, 1/2] ⊆ C^t. If a^t > 1/2 then this is trivially true, so consider the case
that a^t ≤ 1/2. Since Q̄(t, ·) is concave on the open interval (0, 1), it must also be continuous
there. This and the continuity of Q imply that Q̄(t, a^t) = Q(t, a^t). Also, Q̄(t, 0) > Q(t, 0) by
Lemma 4. Thus a^t > 0 and we may take a left-derivative at a^t. For any ε ∈ (0, a^t), a^t − ε ∉ C^t, so
Q̄(t, a^t − ε) ≥ Q(t, a^t − ε). Together with Q̄(t, a^t) = Q(t, a^t), this implies that

    (∂−/∂p) Q̄(t, a^t) = lim_{ε→0+} [Q̄(t, a^t) − Q̄(t, a^t − ε)]/ε ≤ lim_{ε→0+} [Q(t, a^t) − Q(t, a^t − ε)]/ε = (∂−/∂p) Q(t, a^t).

Since Q̄(t, ·) is concave by Lemma 1 and Q(t, ·) is linear on [0, 1/2], we have for any p′ ∈ [a^t, 1/2],

    (∂−/∂p) Q̄(t, p′) ≤ (∂−/∂p) Q̄(t, a^t) ≤ (∂−/∂p) Q(t, a^t) = (∂−/∂p) Q(t, p′).

Since Q̄(t, ·) is concave, it is differentiable except at countably many points, so for any p ∈ [a^t, 1/2],

    Q̄(t, p) = Q̄(t, a^t) + ∫_{a^t}^{p} (∂/∂p′) Q̄(t, p′) dp′ ≤ Q(t, a^t) + ∫_{a^t}^{p} (∂/∂p′) Q(t, p′) dp′ = Q(t, p).

Therefore p ∈ C^t, and, more generally, [a^t, 1/2] ⊆ C^t. By a similar argument, [1/2, b^t] ⊆ C^t.
Finally, C^t ⊆ [a^t, b^t] = [a^t, 1/2] ∪ [1/2, b^t] ⊆ C^t and we must have C^t = [a^t, b^t].
We also include the following proposition, which shows that if D is finite with probability 1 then
the continuation region must eventually narrow to nothing.

Proposition. If P{D < ∞} = 1 then there exists a T < ∞ such that C^T = ∅.

Proof. First consider the case when D is bounded, so P{D ≤ T + 1} = 1 for some time T < ∞.
Then, Q̄(T, pT) = d + c(T + 1), while Q(T, pT) = cT + min(pT, 1 − pT) ≤ cT + 1/2. Thus
Q̄(T, pT) − Q(T, pT) ≥ d + c − 1/2 > 0, and C^T = ∅.

Now consider the case when P{D > t} > 0 for every t. By neglecting the error probability and
including only continuation and deadline costs, we obtain Q̄(t, pt) ≥ d P{D = t+1 | D > t} + c(t+1).
Bounding the error probability by 1/2 we obtain Q(t, pt) ≤ ct + 1/2. Thus, Q̄(t, pt) − Q(t, pt) ≥
c + d P{D = t + 1 | D > t} − 1/2. Since P{D < ∞} = 1, lim_{t→∞} [c + d P{D = t+1 | D >
t} − 1/2] = c + d − 1/2 > 0, and there exists a T such that c + d P{D = t+1 | D > t} − 1/2 > 0 for
every t ≥ T. This implies that, for t ≥ T and pt ∈ [0, 1], Q̄(t, pt) − Q(t, pt) > 0 and C^t = ∅.
3 Computational simulations
We conducted a series of simulations in which we computed the continuation region and distributions of response time and accuracy for the optimal policy, for several choices of the parameters c and
d and of the distribution of the deadline D. We chose the observation xt to be a Bernoulli random
variable under both f0 and f1 for every t = 1, 2, . . ., with different values for q_θ := P{x_i = 1 | θ}. In
our simulations we chose q0 = .45 and q1 = .55.
We computed optimal policies for two different forms of deadline distribution: first for a deterministic deadline fixed to some known constant; and second for a gamma-distributed deadline. The
gamma distribution with parameters k > 0 and λ > 0 has density (λ^k / Γ(k)) x^{k−1} e^{−λx} for x > 0,
where Γ(·) is the gamma function. The parameters k and λ, called the shape and rate parameters
respectively, are completely determined by choosing the mean and the standard deviation of the distribution, since the gamma distribution has mean k/λ and variance k/λ². A fixed deadline T may
actually be seen as a limiting case of a gamma-distributed deadline by taking both k and λ to infinity
such that k/λ = T is fixed.
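A small helper (ours) recovers the shape and rate from a target mean and standard deviation by inverting these two moment equations:

```python
def gamma_params(mean, std):
    """Shape k and rate lam with mean = k/lam and variance = k/lam**2."""
    k = (mean / std) ** 2
    lam = mean / std ** 2
    return k, lam
```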
We used the table-look-up form of the backward dynamic programming algorithm (see, e.g., [11])
to compute the optimal Q-factors. We obtained approximations of the value function and Q-factors
at a finite set of equally spaced discrete points {0, 1/N, . . . , (N − 1)/N, 1} in the interval [0, 1]. In
our simulations we chose N = 999. We establish a final time T that is large enough that P{D ≤ T}
is nearly 1, and thus P{τ* ≤ T} is also nearly 1. In our simulations we chose T = 60. We
approximated the value function V(T, pT) at this final time by Q(T, pT). Then we calculated value
functions and Q-factors for previous times recursively according to Bellman's equation:

    Q̄(t, p) = ⟨V(t + 1, pt+1) | pt = p⟩_{pt+1};    V(t, p) = min(Q̄(t, p), Q(t, p)).

This expectation relating Q̄(t, ·) to V(t + 1, ·) may be written explicitly using our hypotheses and
Eq. 1 to define a function g so that pt+1 = g(pt, xt+1). In our case this function is defined by
g(pt, 1) := (pt q1)/(pt q1 + (1 − pt) q0) and g(pt, 0) := (pt(1 − q1))/(pt(1 − q1) + (1 − pt)(1 − q0)).
Then we note that P{xt+1 = 1 | pt} = P{xt+1 = 1 | θ = 1} pt + P{xt+1 = 1 | θ = 0}(1 − pt) =
pt q1 + (1 − pt) q0, and similarly P{xt+1 = 0 | pt} = pt(1 − q1) + (1 − pt)(1 − q0). Then

    Q̄(t, pt) = (c(t+1) + d) P{D = t+1 | D > t} + P{D > t+1 | D > t} ×
               [V(t+1, g(pt, 1)) (pt q1 + (1 − pt) q0) + V(t+1, g(pt, 0)) (pt(1 − q1) + (1 − pt)(1 − q0))].
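To make the recursion concrete, here is a minimal sketch in Python. This is our own illustration rather than the authors' code: the discretization of the continuous gamma deadline into integer-step hazards P{D = t+1 | D > t}, and the linear interpolation used to evaluate V on the belief grid, are our assumptions.

```python
import numpy as np
from scipy import stats

def optimal_policy(c=0.001, d=2.0, q0=0.45, q1=0.55, T=60, N=999,
                   mean_D=40.0, std_D=1.0):
    """Table-look-up backward recursion; returns the belief grid and the
    continuation region as a boolean table C[t, i] over p_i = i/N."""
    p = np.linspace(0.0, 1.0, N + 1)
    k, lam = (mean_D / std_D) ** 2, mean_D / std_D ** 2
    surv = stats.gamma.sf(np.arange(T + 2), a=k, scale=1.0 / lam)
    # hazard[t] ~ P{D = t+1 | D > t} under the integer-step discretization
    hazard = np.divide(surv[:-1] - surv[1:], surv[:-1],
                       out=np.ones(T + 1), where=surv[:-1] > 0)
    Q_stop = lambda t: np.minimum(p, 1 - p) + c * t
    V = Q_stop(T)                           # terminal approximation
    C = np.zeros((T, N + 1), dtype=bool)
    for t in range(T - 1, -1, -1):
        p1 = p * q1 / (p * q1 + (1 - p) * q0)                     # g(p, 1)
        p0 = p * (1 - q1) / (p * (1 - q1) + (1 - p) * (1 - q0))   # g(p, 0)
        prob1 = p * q1 + (1 - p) * q0       # P{x_{t+1} = 1 | p_t}
        EV = prob1 * np.interp(p1, p, V) + (1 - prob1) * np.interp(p0, p, V)
        Q_cont = (c * (t + 1) + d) * hazard[t] + (1 - hazard[t]) * EV
        stop = Q_stop(t)
        C[t] = stop > Q_cont                # continuation region C^t
        V = np.minimum(Q_cont, stop)
    return p, C
```

With the default settings this should yield an interval of beliefs around p = 1/2 that narrows over time and eventually vanishes, consistent with the Main Theorem.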
We computed continuation regions C^t from these Q-factors, and then used Monte Carlo simulation
with 10^6 samples for each problem setting to estimate P{δ = θ | τ = t} and P{τ = t} as functions
of t. The results of these computational simulations are shown in Figure 2. We see in Fig. 2A that
the decision boundaries for a fixed deadline (solid blue) are smoothly narrowing toward the midline.
Clearly, at the last opportunity for responding before the deadline, the optimal policy would always
generate a response (and therefore the thresholds merge), since we assumed that the cost of penalty
[Figure 2 plots omitted: four panels (A)-(D) showing probability versus time, with the continuation region in blue and response accuracy in red, while varying std(D), mean(D), c, and d.]
Figure 2: Plots of the continuation region C^t (blue), and the probability of a correct response P{δ =
θ | τ = t} (red). The default settings were c = .001, d = 2, mean(D) = 40, std(D) = 1, and
q0 = 1 − q1 = .45. In each plot we varied one of them while keeping the others fixed. In (A) we varied
the standard deviation of D, in (B) the mean of D, in (C) the value of c, and in (D) the value of d.
is greater than the expected cost of making an error: d ≥ .5 (since the optimal policy is to choose the
hypothesis with probability ≥ .5, the expected probability of error is always ≤ .5). At the time step
before, the optimal policy would only continue if one more data point is going to improve the belief
state enough to offset the extra time cost c. Therefore, the optimal policy only continues for a small
"window" around .5 even though it has the opportunity to observe one more data point. At earlier
times, the window "widens" following similar logic. When uncertainty about the deadline increases
(larger std(D); shown in dashed and dash-dotted blue lines), the optimal thresholds are squeezed
toward each other and to the left, the intuition being that the threat of encountering the deadline
spreads earlier and earlier into the trial. The red lines denote the average accuracy for different
stopping times obtained from a million Monte Carlo simulations of the observation-decision process.
They closely follow the decision thresholds (since the threshold is on the posterior probability p_τ),
but are slightly larger, because p_τ must exceed the threshold, and pt moves in discrete increments
due to the discrete Bernoulli process.

The effect of decreasing the mean deadline is to shift the decision boundaries left-ward, as shown in
Fig. 2B. The effect of increasing the cost of time c is to squeeze the boundaries toward the midline
(Fig. 2C); this result is analogous to that seen in the classical sequential probability ratio test for the
infinite-horizon case. The effect of increasing d is to squeeze the thresholds to the left (Fig. 2D),
and the rate of shifting is on the order of log(d) because the tail of the gamma distribution is falling
off nearly exponentially.
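For reference, a single trial of the observation-decision process can be rolled out against the policy table from the earlier sketch; the nearest-grid-point lookup is our simplification:

```python
import numpy as np

def simulate_trial(p_grid, C, q0=0.45, q1=0.55, rng=None):
    """One trial under the policy from `optimal_policy` above; ignores
    the deadline, which would simply truncate tau at D.
    Returns (tau, delta, theta)."""
    rng = rng or np.random.default_rng(1)
    theta = int(rng.random() < 0.5)     # p0 = 0.5 prior over hypotheses
    q = q1 if theta == 1 else q0        # Bernoulli rate of the true density
    p = 0.5
    for t in range(C.shape[0]):
        i = int(round(p * (len(p_grid) - 1)))   # nearest grid point
        if not C[t, i]:                          # stop outside C^t
            return t, int(p >= 0.5), theta
        if rng.random() < q:                     # observe x_{t+1} = 1
            p = p * q1 / (p * q1 + (1 - p) * q0)
        else:                                    # observe x_{t+1} = 0
            p = p * (1 - q1) / (p * (1 - q1) + (1 - p) * (1 - q0))
    return C.shape[0], int(p >= 0.5), theta
```

Averaging (tau, delta == theta) over many such trials estimates P{τ = t} and P{δ = θ | τ = t} as in the figure.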
4 Discussion
In this work, we formalized the problem of sequential hypothesis testing (of two alternatives) under
the pressure of a stochastically sampled deadline, and characterized the optimal policy. For a large
class of deadline distributions (including gamma, normal, exponential, delta), we showed that the
optimal policy is to report a hypothesis as soon as the posterior belief hits one of a pair of monotonically declining thresholds (toward the midline). This generalizes the classical infinite horizon
case in the limit when the deadline goes to infinity, and the optimal policy reverts to a pair of fixed
thresholds as in the sequential probability ratio test [1]. We showed that the decision policy becomes
more conservative (thresholds pushed outward and to the right) when there is less uncertainty about
the deadline, when the mean of the deadline is larger, when the linear temporal cost is larger, and
when the deadline cost is smaller.
In the theoretical analysis, we assumed that D has the property that P{D > t + u | D > t} is non-increasing in t for each u ≥ 0 over the set of t such that P{D > t} > 0. This assumption implies that,
if the deadline has not occurred already, then the likelihood that it will happen soon grows larger
and larger as time passes. The assumption is violated by multi-modal distributions, for which there
is a large probability the deadline will occur at some early point in time, but if the deadline does not
occur by that point in time then it will not occur until some much later time. This assumption is met
by a fixed deadline (std(D) → 0), and also includes the classical infinite-horizon case (D ≡ ∞) as a
special case (and the optimal policy reverts to the sequential probability ratio test). This assumption
is also met by any distribution with a log-concave density because log P{D > t + u | D > t} =
log P{D > t + u} − log P{D > t} = F(t + u) − F(t), where F(t) := log P{D > t}. If the density of
D is log-concave, then F is concave ([9]), and the increment F(t + u) − F(t) is non-increasing in t.
Many common distributions have log-concave densities, including the exponential distribution, the
gamma distribution, the normal distribution, and the uniform distribution on an interval.
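This survival-ratio condition is easy to probe numerically for a candidate deadline distribution; a quick check for the gamma case (entirely our own illustration, using SciPy):

```python
import numpy as np
from scipy import stats

k, lam = 40.0, 1.0                         # shape and rate of the deadline
t = np.arange(0, 200)
for u in (1, 5, 10):
    # P{D > t+u | D > t} as a function of t, for a fixed lag u
    ratio = stats.gamma.sf(t + u, a=k, scale=1 / lam) \
          / stats.gamma.sf(t, a=k, scale=1 / lam)
    assert np.all(np.diff(ratio) <= 1e-9), u   # non-increasing in t
```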
We used gamma distributions for the deadline in the numerical simulations. There are several empirical properties about timing uncertainty in humans and animals that make the gamma distribution
particularly suitable. First, realizations from the gamma distribution are always non-negative, which
is consistent with the assumption that a subject never thinks a deadline has passed before the experiment has started. Second, if we fix the rate parameter λ and vary the shape k, then we obtain a
collection of deadline distributions with different means whose variance and mean are in a fixed ratio, which is consistent with experimental observations [12]. Third, for large values of k the gamma
distribution is approximately normal, which is also consistent with experimental observations [12].
Finally, a gamma distributed random variable with mean μ may be written as the sum of k = λμ
independent exponential random variables with mean 1/λ, so if the brain were able to construct
an exponential-distributed timer whose mean 1/λ were on the order of milliseconds, then it could
construct a very accurate gamma-distributed timer for intervals of several seconds by resetting this
exponential timer k times and responding after the kth alarm. This has interesting ramifications for
how sophisticated timers for relatively long intervals can be constructed from neurons that exhibit
dynamics on the order of milliseconds.
This work makes several interesting empirical predictions. Subjects who have more internal uncertainty, and therefore larger variance in their perceived deadline stochasticity, should respond to
stimuli earlier and with lower accuracy. Similarly, the model makes quantitative predictions about
the subject?s performance when the experimenter explicitly manipulates the mean deadline, and the
relative costs of error, time, and deadline.
Acknowledgments
We thank Jonathan Cohen, Savas Dayanik, Philip Holmes, and Warren Powell for helpful discussions. The first author was supported in part by the Air Force Office of Scientific Research under
grant AFOSR-FA9550-05-1-0121.
References
[1] Wald, A & Wolfowitz, J (1948). Ann. Math. Statist. 19: 326-39.
[2] Luce, R D (1986). Response Times: Their Role in Inferring Elementary Mental Org. Oxford Univ. Press.
[3] Ratcliff, R & Rouder, J N (1998). Psychol. Sci. 9: 347-56.
[4] Bogacz, R et al (2006). Psychol. Rev. 113: 700-65.
[5] Bertsekas, D P (1995). Dynamic Programming and Optimal Control. Athena Scientific.
[6] Gold, J I & Shadlen, M N (2002). Neuron 36: 299-308.
[7] Mozer et al (2004). Proc. Twenty Sixth Annual Conference of the Cognitive Science Society. 981-86.
[8] Siegmund, D (1985). Sequential Analysis. Springer.
[9] Boyd, S & Vandenberghe, L (2004). Convex Optimization. Cambridge Univ. Press.
[10] Poor, H V (1994). An Introduction to Signal Detection and Estimation. Springer-Verlag.
[11] Powell, W B (2007). Approximate Dynamic Programming: Solving the Curses of Dimensionality. Wiley.
[12] Rakitin, et al (1998). J. Exp. Psychol. Anim. Behav. Process. 24: 15-33.
2,598 | 3,356 | Efficient Convex Relaxation for
Transductive Support Vector Machine
Zenglin Xu
Dept. of Computer Science & Engineering
The Chinese University of Hong Kong
Shatin, N.T., Hong Kong
zlxu@cse.cuhk.edu.hk
Rong Jin
Dept. of Computer Science & Engineering
Michigan State University
East Lansing, MI, 48824
rongjin@cse.msu.edu
Jianke Zhu
Irwin King
Michael R. Lyu
Dept. of Computer Science & Engineering
The Chinese University of Hong Kong
Shatin, N.T., Hong Kong
{jkzhu,king,lyu}@cse.cuhk.edu.hk
Abstract
We consider the problem of Support Vector Machine transduction, which involves a combinatorial problem with exponential computational complexity in the number of unlabeled examples. Although several studies have been devoted to Transductive SVM, they suffer either from high computational complexity or from solutions that are only locally optimal. To address this problem, we propose solving Transductive SVM via a convex relaxation, which converts the NP-hard problem into a semi-definite program. Compared with the other SDP relaxation for Transductive SVM, the proposed algorithm is computationally more efficient, with the number of free parameters reduced from O(n²) to O(n), where n is the number of examples. An empirical study with several benchmark data sets shows the promising performance of the proposed algorithm in comparison with other state-of-the-art implementations of Transductive SVM.
1
Introduction
Semi-supervised learning has attracted an increasing amount of research interest recently [3, 15]. An
important semi-supervised learning paradigm is the Transductive Support Vector Machine (TSVM),
which maximizes the margin in the presence of unlabeled data and keeps the boundary traversing
through low density regions, while respecting labels in the input space.
Since TSVM requires solving a combinatorial optimization problem, extensive research efforts have
been devoted to efficiently finding the approximate solution to TSVM. The popular version of TSVM
proposed in [8] uses a label-switching-retraining procedure to speed up the computation. In [5], the
hinge loss in TSVM is replaced by a smooth loss function, and a gradient descent method is used
to find the decision boundary in a region of low density. Chapelle et al. [2] employ an iterative
approach for TSVM. It begins by minimizing an easy convex objective function, and then gradually approximates the objective of TSVM with more complicated functions. The solution of the
simple function is used as the initialization for the solution to the complicated function. Other iterative methods, such as deterministic annealing [11] and the concave-convex procedure (CCCP)
method [6], are also employed to solve the optimization problem related to TSVM. The main drawback of the approximation methods listed above is that they are susceptible to local optima, and
therefore are sensitive to the initialization of solutions. To address this problem, a branch-and-bound search method is developed in [4] to find the exact solution. In [14], the authors approximate TSVM by a semi-definite programming problem, which leads to a relaxed solution of TSVM (denoted RTSVM) that avoids local optima. However, both approaches suffer from high computational cost and can only be applied to small data sets.

[Figure 1: Computation time (in seconds) of the proposed convex relaxation approach for TSVM (i.e., CTSVM) and the semi-definite relaxation approach for TSVM (i.e., RTSVM) versus the number of unlabeled examples. The Course data set is used, and the number of labeled examples is 20.]
To this end, we present the convex relaxation for Transductive SVM (CTSVM). The key idea of our
method is to approximate the non-convex optimization problem of TSVM by its dual problem. The
advantage of doing so is twofold:
• Unlike the semi-definite relaxation [14], which approximates TSVM by dropping the rank constraint, the proposed approach approximates TSVM by its dual problem. As a basic result of convex analysis, the conjugate of the conjugate of any function f(x) is the convex envelope of f(x), and therefore provides a tighter convex relaxation for f(x) [7]. Hence, the proposed approach provides a better convex relaxation than that in [14] for the optimization problem in TSVM.
• Compared to the semi-definite relaxation of TSVM, the proposed algorithm involves fewer free parameters and therefore significantly improves the efficiency, reducing the worst-case computational complexity from O(n^{6.5}) to O(n^{4.5}). Figure 1 shows the running time of both the semi-definite relaxation of TSVM in [14] and the proposed convex relaxation for TSVM versus an increasing number of unlabeled examples. The data set used in this example is the Course data set (see the experiment section), and the number of labeled examples is 20. We clearly see that the proposed convex relaxation approach is considerably more efficient than the semi-definite approach.
The rest of this paper is organized as follows. Section 2 reviews the related work on the semi-definite relaxation for TSVM. Section 3 presents the convex relaxation approach for Transductive SVM. Section 4 presents the empirical studies that verify the effectiveness of the proposed relaxation. Section 5 concludes.
2
Related Work
In this section, we review the key formulae for Transductive SVM, followed by the semi-definite
programming relaxation for TSVM.
Let X = (x_1, ..., x_n) denote the entire data set, including both the labeled examples and the unlabeled ones. We assume that the first l examples within X are labeled by y^ℓ = (y_1^ℓ, y_2^ℓ, ..., y_l^ℓ), where y_i^ℓ ∈ {−1, +1} represents the binary class label assigned to x_i. We further denote by y = (y_1, y_2, ..., y_n) ∈ {−1, +1}^n the binary class labels predicted for all the data points in X. The goal of TSVM is to estimate y by using both the labeled examples and the unlabeled ones.

Following the maximum-margin framework, TSVM aims to identify the classification model that will result in the maximum classification margin for both labeled and unlabeled examples, which amounts to solving the following optimization problem:
    min_{w, b, y ∈ {−1,+1}^n, ξ}   ‖w‖₂² + C Σ_{i=1}^{n} ξ_i
    s.t.   y_i (w^T x_i − b) ≥ 1 − ξ_i,   ξ_i ≥ 0,   i = 1, 2, ..., n,
           y_i = y_i^ℓ,   i = 1, 2, ..., l,

where C ≥ 0 is the trade-off parameter between the complexity of the function w and the margin errors. The prediction function is f(x) = w^T x − b.
Evidently, the above problem is a non-convex optimization problem due to the product term y_i w_j in the constraint. To approximate it by a convex program, we first rewrite it in the following form using the Lagrange theorem:

    min_{μ, y ∈ {−1,+1}^n, δ, λ}   (1/2) (e + μ − δ + λy)^T D(y) K^{−1} D(y) (e + μ − δ + λy) + C δ^T e        (1)
    s.t.   μ ≥ 0,   δ ≥ 0,   y_i = y_i^ℓ,   i = 1, 2, ..., l,

where μ, δ, and λ are the dual variables, e is the n-dimensional column vector of all ones, K is the kernel matrix, and D(y) is the diagonal matrix whose diagonal elements form the vector y.
A detailed derivation can be found in [9, 13]. Using the Schur complement, the problem can be reformulated as

    min_{y ∈ {−1,+1}^n, t, μ, δ, λ}   t        (2)
    s.t.   [ yy^T ∘ K                e + μ − δ + λy ]
           [ (e + μ − δ + λy)^T      t − 2C δ^T e   ]  ⪰ 0,
           μ ≥ 0,   δ ≥ 0,   y_i = y_i^ℓ,   i = 1, 2, ..., l,

where the operator ∘ denotes the element-wise (Hadamard) product.
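As a reminder of the step used here (a standard fact, stated for completeness rather than taken from the paper):

$$
\begin{pmatrix} A & b \\ b^{\top} & c \end{pmatrix} \succeq 0
\iff
c - b^{\top} A^{-1} b \ge 0 \qquad \text{when } A \succ 0 .
$$

Applied to (2) with A = yy^T ∘ K = D(y) K D(y), and using D(y)^{−1} = D(y), the LMI states that t − 2C δ^T e ≥ (e + μ − δ + λy)^T D(y) K^{−1} D(y) (e + μ − δ + λy), so minimizing t recovers twice the objective of (1) at the optimum.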
To convert the above problem into a convex optimization problem, the key idea is to replace the quadratic term yy^T by a linear matrix variable. Based on the fact that the set S_a = {M = yy^T : y ∈ {−1,+1}^n} is equal to the set S_b = {M : M_{i,i} = 1, rank(M) = 1}, the problem in (2) can be approximated as

    min_{M, t, μ, δ}   t        (3)
    s.t.   [ M ∘ K            e + μ − δ     ]
           [ (e + μ − δ)^T    t − 2C δ^T e  ]  ⪰ 0,
           μ ≥ 0,   δ ≥ 0,
           M ⪰ 0,   M_{i,i} = 1,   i = 1, 2, ..., n,

where M_{ij} = y_i^ℓ y_j^ℓ for 1 ≤ i, j ≤ l.
Note that the key differences between (2) and (3) are that (a) the rank constraint rank(M) = 1 is dropped, and (b) the variable λ is set to zero, which is equivalent to setting b = 0. The above approximation is often referred to as the Semi-Definite Programming (SDP) relaxation. As revealed by previous studies [14, 1], the SDP problem resulting from this approximation is computationally expensive: there are O(n²) parameters in the SDP cone and O(n) linear inequality constraints, which implies a worst-case computational complexity of O(n^{6.5}). To avoid this high computational cost, we present a different approach for relaxing TSVM into a convex problem. Compared to the SDP relaxation approach, it is advantageous in that (1) it produces the best convex approximation for TSVM, and (2) it is computationally more efficient than the previous SDP relaxation.
3
Relaxed Transductive Support Vector Machine
In this section, we follow the work of generalized maximum margin clustering [13] by first studying
the case of hard margin, and then extending it to the case of soft margin.
3.1
Hard Margin TSVM
In the hard-margin case, SVM does not penalize classification errors, which corresponds to δ = 0 in (1). The resulting formulation of TSVM becomes

    min_{μ, y, λ}   (1/2) (e + μ + λy)^T D(y) K^{−1} D(y) (e + μ + λy)        (4)
    s.t.   μ ≥ 0,
           y_i = y_i^ℓ,   i = 1, 2, ..., l,
           y_i² = 1,   i = l+1, l+2, ..., n.
Instead of employing the SDP relaxation as in [14], we follow the work in [13] and introduce a variable z = D(y)(e + μ) = y ∘ (e + μ). Given that μ ≥ 0, the constraints in (4) can be written as y_i^ℓ z_i ≥ 1 for the labeled examples and z_i² ≥ 1 for the unlabeled examples. Hence, z can be used as the prediction function, i.e., f* = z. Using this new notation, the optimization problem in (4) can be rewritten as

    min_{z, λ}   (1/2) (z + λe)^T K^{−1} (z + λe)        (5)
    s.t.   y_i^ℓ z_i ≥ 1,   i = 1, 2, ..., l,
           z_i² ≥ 1,   i = l+1, l+2, ..., n.
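The substitution is easy to verify (a one-line check filling in the step asserted above): since z = y ∘ (e + μ) with μ ≥ 0,

$$
z_i = y_i (1 + \mu_i), \qquad
y_i^{\ell} z_i = (y_i^{\ell})^2 (1+\mu_i) = 1 + \mu_i \ge 1 \ \ (i \le l), \qquad
z_i^2 = (1+\mu_i)^2 \ge 1 \ \ (i > l),
$$

using y_i = y_i^ℓ on the labeled examples and y_i² = 1 throughout.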
One problem with Transductive SVMs is that it is possible to assign all the unlabeled data to a single class with a very large margin, due to the high dimensionality and the small number of labeled examples. This leads to poor generalization. To address it, we introduce the following balance constraint, which ensures that no single class takes all of the unlabeled examples:

    −ε ≤ (1/l) Σ_{i=1}^{l} z_i − (1/(n−l)) Σ_{i=l+1}^{n} z_i ≤ ε,        (6)

where ε ≥ 0 is a constant. Through this constraint, we ensure that the class assignments of the labeled data and of the unlabeled data do not differ too much on average.
To simplify the expression, we further define w = (z, λ) ∈ R^{n+1} and P = (I_n, e) ∈ R^{n×(n+1)}. Then the problem in (5) becomes

    min_w   w^T P^T K^{−1} P w        (7)
    s.t.   y_i^ℓ w_i ≥ 1,   i = 1, 2, ..., l,
           w_i² ≥ 1,   i = l+1, l+2, ..., n,
           −ε ≤ (1/l) Σ_{i=1}^{l} w_i − (1/(n−l)) Σ_{i=l+1}^{n} w_i ≤ ε.

Once this problem is solved, the label vector y is determined directly by the sign of the prediction function, i.e., sign(w): this is because w_i = (1 + μ_i) y_i for i = l+1, ..., n and μ_i ≥ 0.
The following theorem shows that the problem in (7) can be relaxed to a semi-definite program.

Theorem 1. Given a sample X = {x_1, ..., x_n} and a partial set of labels y^ℓ = (y_1^ℓ, y_2^ℓ, ..., y_l^ℓ) where 1 ≤ l ≤ n, the variable w that optimizes (7) can be calculated as

    w = (1/2) [A − D(ν ∘ b)]^{−1} (ν ∘ a − (γ − γ̃) c),        (8)

where a = (y^ℓ, 0_{n−l}, 0) ∈ R^{n+1}, b = (0_l, 1_{n−l}, 0) ∈ R^{n+1}, c = ((1/l) 1_l, −(1/(n−l)) 1_{n−l}, 0) ∈ R^{n+1}, A = P^T K^{−1} P, and ν is determined by the following semi-definite program:

    max_{ν, t, γ, γ̃}   −(1/4) t + Σ_{i=1}^{n} ν_i − ε (γ + γ̃)        (9)
    s.t.   [ A − D(ν ∘ b)                 ν ∘ a − (γ − γ̃) c ]
           [ (ν ∘ a − (γ − γ̃) c)^T       t                  ]  ⪰ 0,
           γ ≥ 0,   γ̃ ≥ 0,   ν_i ≥ 0,   i = 1, 2, ..., n.
Proof Sketch. Define the Lagrangian of the minimization problem (7) as

    min_w max_{ν, γ, γ̃}  F(w, ν, γ, γ̃)
      = w^T P^T K^{−1} P w + Σ_{i=1}^{l} ν_i (1 − y_i^ℓ w_i) + Σ_{i=l+1}^{n} ν_i (1 − w_i²)
        + γ (c^T w − ε) + γ̃ (−c^T w − ε),

where ν_i ≥ 0 for i = 1, ..., n. By duality, min_w max F = max min_w F.

At the optimum, the derivative of F with respect to w vanishes:

    ∂F/∂w = 2 [A − D(ν ∘ b)] w − ν ∘ a + (γ − γ̃) c = 0,

where A = P^T K^{−1} P. The inverse of A − D(ν ∘ b) can be computed after adding a small regularization term. Therefore w is given by

    w = (1/2) [A − D(ν ∘ b)]^{−1} (ν ∘ a − (γ − γ̃) c).

Substituting back, the dual problem becomes

    max_ν  L(ν) = −(1/4) (ν ∘ a − (γ − γ̃) c)^T [A − D(ν ∘ b)]^{−1} (ν ∘ a − (γ − γ̃) c) + Σ_{i=1}^{n} ν_i − ε (γ + γ̃).

Introducing a variable t with (ν ∘ a − (γ − γ̃) c)^T [A − D(ν ∘ b)]^{−1} (ν ∘ a − (γ − γ̃) c) ≤ t, which by the Schur complement is exactly the cone constraint in (9), gives the stated semi-definite program. □
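For concreteness, the SDP (9) is small enough to prototype with an off-the-shelf modeling tool. The following is a minimal sketch, assuming CVXPY with an SDP-capable solver such as SCS; the function name, the ridge added to K, and the zero-padding of ν on the bias coordinate are our illustrative choices, not details from the paper.

```python
import cvxpy as cp
import numpy as np

def ctsvm_hard_margin(K, y_l, eps=0.05):
    """Sketch of the relaxation (9); returns predicted labels sign(w)."""
    n, l = K.shape[0], len(y_l)
    P = np.hstack([np.eye(n), np.ones((n, 1))])           # P = (I_n, e)
    Kinv = np.linalg.inv(K + 1e-6 * np.eye(n))            # regularized K^{-1}
    A = P.T @ Kinv @ P
    A = 0.5 * (A + A.T)                                   # enforce exact symmetry
    a = np.concatenate([y_l, np.zeros(n - l), [0.0]])     # a = (y^l, 0_{n-l}, 0)
    b = np.concatenate([np.zeros(l), np.ones(n - l), [0.0]])
    c = np.concatenate([np.ones(l) / l, -np.ones(n - l) / (n - l), [0.0]])

    nu = cp.Variable(n, nonneg=True)                      # only O(n) free parameters
    nu_pad = cp.hstack([nu, np.zeros(1)])                 # no dual on the bias coordinate
    g1, g2 = cp.Variable(nonneg=True), cp.Variable(nonneg=True)
    t = cp.Variable()

    v = cp.multiply(nu_pad, a) - (g1 - g2) * c
    lmi = cp.bmat([[A - cp.diag(cp.multiply(nu_pad, b)), cp.reshape(v, (n + 1, 1))],
                   [cp.reshape(v, (1, n + 1)), cp.reshape(t, (1, 1))]])
    obj = cp.Maximize(-t / 4 + cp.sum(nu) - eps * (g1 + g2))
    cp.Problem(obj, [lmi >> 0]).solve(solver=cp.SCS)

    # recover w via (8) and predict by its sign; w[n] is the bias term lambda
    w = 0.5 * np.linalg.solve(A - np.diag(nu_pad.value * b),
                              nu_pad.value * a - (g1.value - g2.value) * c)
    return np.sign(w[:n])
```

Note how the cone here involves only the O(n) scalar variables (ν, γ, γ̃, t), in contrast to the O(n²) entries of the matrix M in (3); this is exactly the efficiency gain discussed in Remark I below.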
Remark I. The problem in (9) is a convex optimization problem, more specifically a semi-definite program, and can be solved efficiently by interior-point methods [10] as implemented in optimization packages such as SeDuMi [12]. Moreover, our relaxation has O(n) parameters in the SDP cone and O(n) linear constraints, which implies a worst-case computational complexity of O(n^{4.5}). In the previous relaxation algorithms [1, 14], by contrast, there are approximately O(n²) parameters in the SDP cone, implying a worst-case complexity on the order of O(n^{6.5}); our proposed convex relaxation algorithm is therefore more efficient. In addition, as analyzed in Section 2, the approximation in [1, 14] drops the rank constraint on the matrix yy^T, which does not lead to a tight approximation. Our prediction
function f*, on the other hand, implements the conjugate of the conjugate of the prediction function f(x), which is the convex envelope of f(x) [7]; our proposed convex approximation is thus tighter than the previous method.

Remark II. It is interesting to discuss the connection between the solution of the proposed algorithm and that of harmonic functions. Consider the special case of (8) in which λ = 0 (i.e., no bias term in the primal SVM) and there is no balance constraint. Then the solution of (9) can be expressed as

    z = (1/2) [ K^{−1} − D(ν ∘ (0_l, 1_{n−l})) ]^{−1} (ν ∘ (y^ℓ, 0_{n−l})).        (10)

This can be further rewritten as

    z = ( I_n − Σ_{i=l+1}^{n} ν_i K I_n^i )^{−1} ( Σ_{i=1}^{l} ν_i y_i^ℓ K(x_i, ·) ),        (11)

where I_n^i is the n × n matrix whose entries are all zero except the i-th diagonal element, which is 1, and K(x_i, ·) is the i-th column of K. As with the solution of the harmonic function, the class labels are first propagated from the labeled examples to the unlabeled ones by the term Σ_{i=1}^{l} ν_i y_i^ℓ K(x_i, ·), and the predicted labels are then adjusted by the factor (I_n − Σ_{i=l+1}^{n} ν_i K I_n^i)^{−1}. The key differences in our solution are that (1) different weights ν_i are assigned to the labeled examples, and (2) the adjustment factor differs from that of the harmonic function [16].
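To make (11) concrete, here is a toy evaluation of this special-case solution (our illustrative helper; it assumes the dual weights ν have already been obtained, e.g. from the SDP above):

```python
import numpy as np

def harmonic_style_prediction(K, y_l, nu):
    """Evaluate the special-case solution (11).

    K   : (n, n) kernel matrix
    y_l : (l,) labels in {-1, +1} of the first l points
    nu  : (n,) nonnegative dual weights
    """
    n, l = K.shape[0], len(y_l)
    # propagation term: sum_{i<=l} nu_i y_i^l K(x_i, .), i.e. weighted columns of K
    prop = K[:, :l] @ (nu[:l] * y_l)
    # sum_{i>l} nu_i K I_n^i  =  K @ diag(0, ..., 0, nu_{l+1}, ..., nu_n)
    scale = np.concatenate([np.zeros(l), nu[l:]])
    M = np.eye(n) - K * scale[None, :]
    # adjustment factor (I_n - sum_{i>l} nu_i K I_n^i)^{-1} applied to the propagated labels
    return np.linalg.solve(M, prop)
```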
3.2
Soft Margin TSVM
We extend TSVM to the soft-margin case by considering the following problem:

    min_{μ, y, δ, λ}   (1/2) (e + μ − δ + λy)^T D(y) K^{−1} D(y) (e + μ − δ + λy) + C_ℓ Σ_{i=1}^{l} δ_i² + C_u Σ_{i=l+1}^{n} δ_i²
    s.t.   μ ≥ 0,   δ ≥ 0,
           y_i = y_i^ℓ,   1 ≤ i ≤ l,
           y_i² = 1,   l+1 ≤ i ≤ n,

where δ_i is related to the margin error. Note that we distinguish the labeled from the unlabeled examples by introducing different penalty constants for margin errors: C_ℓ for labeled examples and C_u for unlabeled ones.
Similarly, we introduce the variable z and derive the following dual problem:

    max_{ν, t, γ, γ̃}   −(1/4) t + Σ_{i=1}^{n} ν_i − ε (γ + γ̃)        (12)
    s.t.   [ A − D(ν ∘ b)                 ν ∘ a − (γ − γ̃) c ]
           [ (ν ∘ a − (γ − γ̃) c)^T       t                  ]  ⪰ 0,
           0 ≤ ν_i ≤ C_ℓ,   i = 1, 2, ..., l,
           0 ≤ ν_i ≤ C_u,   i = l+1, l+2, ..., n,
           γ ≥ 0,   γ̃ ≥ 0,

which is again a semi-definite program and can be solved in the same way.

4
Experiments
In this section, we report an empirical study of the proposed method on several benchmark data sets.
4.1
Data Sets Description
To make the evaluation comprehensive, we collected four UCI data sets and three text data sets as our experimental testbeds. The UCI data sets, Iono, Sonar, Banana, and Breast, are widely used in data classification. The WinMac data set consists of the mswindows and mac classes of the Newsgroup20 corpus; the IBM data set consists of its IBM and non-IBM classes. The Course data set is made up of the course and non-course pages of the WebKB corpus. From each text data set, we randomly sample subsets of size 60, 300, and 1000; each resulting sample is denoted by the suffix '-s', '-m', or '-l' according to whether the sample size is small, medium, or large. Table 1 summarizes these data sets, where d is the data dimensionality, l the number of labeled data points, and n the total number of examples.
Table 1: Data sets used in the experiments, where d represents the data dimensionality, l means the
number of labeled data points, and n denotes the total number of examples.
    Data set   d      l    n      Data set   d      l    n
    Iono       34     20   351    WinMac-m   7511   20   300
    Sonar      60     20   208    IBM-m      11960  20   300
    Banana     4      20   400    Course-m   1800   20   300
    Breast     9      20   300    WinMac-l   7511   50   1000
    IBM-s      11960  10   60     IBM-l      11960  50   1000
    Course-s   1800   10   60     Course-l   1800   50   1000
4.2
Experimental Protocol
To evaluate the effectiveness of the proposed CTSVM method, we choose the conventional SVM as our baseline. We also compare against three state-of-the-art methods: the SVM-light algorithm [8], the Gradient Descent TSVM (∇TSVM) algorithm [5], and the Concave-Convex Procedure (CCCP) [6]. The SDP approximation of TSVM [14] has a very high time complexity of O(n^{6.5}) and is difficult to run on data sets with more than a few hundred examples; it is therefore evaluated only on the smallest data sets, 'IBM-s' and 'Course-s'.

The experimental setup is as follows. For each data set, we conduct 10 trials. In each trial, the training set contains examples of every class, and the remaining data are used as the unlabeled (test) data. The RBF kernel is used for 'Iono', 'Sonar', and 'Banana', and the linear kernel for the other data sets, since the linear kernel performs better than the RBF kernel on them. The RBF kernel width is chosen by 5-fold cross-validation on the labeled data. The margin parameter C_ℓ is tuned using the labeled data in all algorithms. Due to the small number of labeled examples, the margin parameter for unlabeled data, C_u, is set equal to C_ℓ for CTSVM and CCCP. Other parameters are set to their default values according to the relevant literature.
4.3
Experimental Results
Table 2: The classification performance of Transductive SVMs on benchmark data sets.
    Data Set   SVM          SVM-light    ∇TSVM         CCCP          CTSVM
    Iono       78.55±4.83   78.25±0.36   81.72±4.50    82.11±3.83    80.09±2.63
    Sonar      51.76±5.05   55.26±5.88   69.36±4.69    56.01±6.70    67.39±6.26
    Banana     58.45±7.15   n/a          71.54±7.28    79.33±4.22    79.51±3.02
    Breast     96.46±1.18   95.68±1.82   97.17±0.35    96.89±0.67    97.79±0.23
    IBM-s      52.75±15.01  67.60±9.29   65.80±6.56    65.62±14.83   75.25±7.49
    Course-s   63.52±5.82   76.82±4.78   75.80±12.87   74.20±11.50   79.75±8.45
    WinMac-m   57.64±9.58   79.42±4.60   81.03±8.23    84.28±8.84    84.82±2.12
    IBM-m      53.00±6.83   67.55±6.74   64.65±13.38   69.62±11.03   73.17±0.89
    Course-m   80.18±1.27   93.89±1.49   90.35±3.59    88.78±2.87    92.92±2.28
    WinMac-l   60.86±10.10  89.81±2.10   90.19±2.65    91.00±2.42    91.25±2.67
    IBM-l      61.82±7.26   75.40±2.26   73.11±1.99    74.80±1.87    73.42±3.23
    Course-l   83.56±3.10   92.35±3.02   93.58±2.68    91.32±4.08    94.62±0.97
Table 2 summarizes the classification accuracies and standard deviations of the proposed algorithm, the baseline, and the state-of-the-art methods. Our proposed algorithm performs significantly better than the standard SVM across all the data sets. Moreover, on the small data sets 'IBM-s' and 'Course-s', the SDP-relaxation method achieves 68.57±22.73 and 64.03±7.65 respectively, worse than the proposed CTSVM method. The proposed CTSVM algorithm also performs much better than the other TSVM methods on 'WinMac-m' and 'Course-l'. As shown in Table 2, the SVM-light algorithm achieves the best results on 'Course-m' and 'IBM-l', but fails to converge on 'Banana'. On the remaining data sets, our algorithm obtains comparable results. Overall, the empirical evaluation indicates that the proposed CTSVM method achieves promising classification results compared with the state-of-the-art methods.
5
Conclusion and Future Work
This paper presents a novel method for Transductive SVM that relaxes the unknown labels to continuous variables. In contrast to the previous relaxation method, which involves O(n²) free parameters in the semi-definite matrix, our method reduces the number of free parameters to O(n) and can therefore solve the optimization problem more efficiently. In addition, the proposed approach provides a tighter convex relaxation of the TSVM optimization problem. Empirical studies on benchmark data sets demonstrate that the proposed method is more efficient than the previous semi-definite relaxation method and achieves promising classification results compared with the state-of-the-art methods.

As the current model is designed only for binary classification, we plan to develop a multi-class Transductive SVM model in the future. It would also be desirable to extend the current model to classify new incoming data.
Acknowledgments
The work described in this paper is supported by a CUHK Internal Grant (No. 2050346) and a grant
from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project
No. CUHK4150/07E).
References
[1] T. D. Bie and N. Cristianini. Convex methods for transduction. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[2] O. Chapelle, M. Chi, and A. Zien. A continuation method for semi-supervised SVMs. In ICML '06: Proceedings of the 23rd International Conference on Machine Learning, pages 185-192, New York, NY, USA, 2006. ACM Press.
[3] O. Chapelle, B. Schölkopf, and A. Zien. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
[4] O. Chapelle, V. Sindhwani, and S. Keerthi. Branch and bound for semi-supervised support vector machines. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press, Cambridge, MA, 2007.
[5] O. Chapelle and A. Zien. Semi-supervised classification by low density separation. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, pages 57-64, 2005.
[6] R. Collobert, F. Sinz, J. Weston, and L. Bottou. Large scale transductive SVMs. Journal of Machine Learning Research, 7:1687-1712, 2006.
[7] J.-B. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms II: Advanced Theory and Bundle Methods. Springer-Verlag, New York, 1993.
[8] T. Joachims. Transductive inference for text classification using support vector machines. In ICML '99: Proceedings of the Sixteenth International Conference on Machine Learning, pages 200-209, San Francisco, CA, USA, 1999. Morgan Kaufmann Publishers Inc.
[9] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. E. Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27-72, 2004.
[10] Y. Nesterov and A. Nemirovsky. Interior Point Polynomial Methods in Convex Programming: Theory and Applications. Studies in Applied Mathematics. SIAM, Philadelphia, 1994.
[11] V. Sindhwani, S. S. Keerthi, and O. Chapelle. Deterministic annealing for semi-supervised kernel machines. In ICML '06: Proceedings of the 23rd International Conference on Machine Learning, pages 841-848, New York, NY, USA, 2006. ACM Press.
[12] J. F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11:625-653, 1999.
[13] H. Valizadegan and R. Jin. Generalized maximum margin clustering and unsupervised kernel learning. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press, Cambridge, MA, 2007.
[14] L. Xu and D. Schuurmans. Unsupervised and semi-supervised multi-class support vector machines. In AAAI, pages 904-910, 2005.
[15] X. Zhu. Semi-supervised learning literature survey. Technical report, Computer Sciences, University of Wisconsin-Madison, 2005.
[16] X. Zhu, Z. Ghahramani, and J. D. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of the Twentieth International Conference on Machine Learning (ICML 2003), pages 912-919, 2003.
2,599 | 3,357 | A Learning Framework for Nearest Neighbor Search
Sanjoy Dasgupta
Department of Computer Science
University of California, San Diego
dasgupta@cs.ucsd.edu
Lawrence Cayton
Department of Computer Science
University of California, San Diego
lcayton@cs.ucsd.edu
Abstract
Can we leverage learning techniques to build a fast nearest-neighbor (ANN) retrieval data structure? We present a general learning framework for the NN problem in which sample queries are used to learn the parameters of a data structure
that minimize the retrieval time and/or the miss rate. We explore the potential of
this novel framework through two popular NN data structures: KD-trees and the
rectilinear structures employed by locality sensitive hashing. We derive a generalization theory for these data structure classes and present simple learning algorithms for both. Experimental results reveal that learning often improves on the
already strong performance of these data structures.
1
Introduction
Nearest neighbor (NN) searching is a fundamental operation in machine learning, databases, signal
processing, and a variety of other disciplines. We have a database of points X = {x1 , . . . , xn }, and
on an input query q, we hope to return the nearest (or approximately nearest, or k-nearest) point(s)
to q in X using some similarity measure.
A tremendous amount of research has been devoted to designing data structures for fast NN retrieval.
Most of these structures are based on some clever partitioning of the space, and a few have bounds (typically worst-case) on the number of distance calculations needed to answer a query.
In this work, we propose a novel approach to building an efficient NN data structure based on
learning. In contrast to the various data structures built using geometric intuitions, this learning
framework allows one to construct a data structure by directly minimizing the cost of querying it.
In our framework, a sample query set guides the construction of the data structure containing the
database. In the absence of a sample query set, the database itself may be used as a reasonable prior.
The problem of building a NN data structure can then be cast as a learning problem:
Learn a data structure that yields efficient retrieval times on the sample queries
and is simple enough to generalize well.
A major benefit of this framework is that one can seamlessly handle situations where the query
distribution is substantially different from the distribution of the database.
We consider two different function classes that have performed well in NN searching: KD-trees
and the cell structures employed by locality sensitive hashing. The known algorithms for these
data structures do not, of course, use learning to choose the parameters. Nevertheless, we can
examine the generalization properties of a data structure learned from one of these classes. We
derive generalization bounds for both of these classes in this paper.
Can the framework be practically applied? We present very simple learning algorithms for both of
these data structure classes that exhibit improved performance over their standard counterparts.
2
Related work
There is a voluminous literature on data structures for nearest neighbor search, spanning several
academic communities. Work on efficient NN data structures can be classified according to two
criteria: whether they return exact or approximate answers to queries; and whether they merely
assume the distance function is a metric or make a stronger assumption (usually that the data are
Euclidean). The framework we describe in this paper applies to all these methods, though we focus
in particular on data structures for R^D.
Perhaps the most popular data structure for nearest neighbor search in R^D is the simple and convenient KD-tree [1], which has enjoyed success in a vast range of applications. Its main downside is that its performance is widely believed to degrade rapidly with increasing dimension. Variants of the data structure have been developed to ameliorate this and other problems [2], though high-dimensional databases continue to be challenging. One recent line of work suggests randomly projecting points in the database down to a low-dimensional space and then using KD-trees [3, 4].

Locality sensitive hashing (LSH) has emerged as a promising option for high-dimensional NN search in R^D [5]. It has strong theoretical guarantees for databases of arbitrary dimensionality, though they are for approximate NN search. We review both KD-trees and LSH in detail later.
For data in metric spaces, there are several schemes based on repeatedly applying the triangle inequality to eliminate portions of the space from consideration; these include Orchard's algorithm
[6] and AESA [7]. Metric trees [8] and the recently suggested spill trees [3] are based on similar
ideas and are related to KD-trees. A recent trend is to look for data structures that are attuned to the
intrinsic dimension, e.g. [9]. See the excellent survey [10] for more information.
There has been some work on building a data structure for a particular query distribution [11];
this line of work is perhaps most similar to ours. Indeed, we discovered at the time of press that the
algorithm for KD-trees we describe appeared previously in [12]. Nevertheless, the learning theoretic
approach in this paper is novel; the study of NN data structures through the lens of generalization
ability provides a fundamentally different theoretical basis for NN search with important practical
implications.
3
Learning framework
In this section we formalize a learning framework for NN search. This framework is quite general
and will hopefully be of use to algorithmic developments in NN searching beyond those presented
in this paper.
Let X = {x_1, ..., x_n} denote the database and Q the space from which queries are drawn. A typical example is X ⊂ R^D and Q = R^D. We take a nearest neighbor data structure to be a mapping f : Q → 2^X; the interpretation is that we compute distances only to f(q), not to all of X. For example, the structure underlying LSH partitions R^D into cells, and a query is assigned to the subset of X that falls into the same cell.
What quantities are we interested in optimizing? We want to compute distances to only a small fraction of the database on a query; and, in the case of probabilistic algorithms, we want a high probability of success. More precisely, we hope to minimize the following two quantities for a data structure f:

• The fraction of X to which we need to compute distances:

    size_f(q) ≡ |f(q)| / n.

• The fraction of a query's k nearest neighbors that are missed:

    miss_f(q) ≡ |Γ_k(q) \ f(q)| / k

  (Γ_k(q) denotes the k nearest neighbors of q in X).
In ε-approximate NN search, we only require a point x such that d(q, x) ≤ (1 + ε) d(q, X), so we instead use an approximate miss rate:

    miss_f^ε(q) ≡ 1[ there is no x ∈ f(q) such that d(q, x) ≤ (1 + ε) d(q, X) ].
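Both quantities are easy to estimate on a sample. A small sketch (our illustrative helper, not code from the paper; brute force supplies the ground-truth neighbors):

```python
import numpy as np

def empirical_rates(f, queries, X, k=1):
    """Average size_f and miss_f of a data structure f over sample queries.

    f       : callable; f(q) returns the indices of the database points searched
    queries : (m, d) array of sample queries
    X       : (n, d) array, the database
    """
    n, m = len(X), len(queries)
    size_sum = miss_sum = 0.0
    for q in queries:
        candidates = set(f(q))
        true_nn = set(np.argsort(np.linalg.norm(X - q, axis=1))[:k].tolist())
        size_sum += len(candidates) / n
        miss_sum += len(true_nn - candidates) / k
    return size_sum / m, miss_sum / m
```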
None of the previously discussed data structures are built by explicitly minimizing these quantities, though known bounds exist for some. Why not? One reason is that research has typically focused on worst-case size_f and miss_f rates, which require minimizing these functions over all q ∈ Q, and Q is typically infinite. In this work, we instead focus on average-case size_f and miss_f rates: we assume q is a draw from some unknown distribution D on Q and hope to minimize

    E_{q∼D}[size_f(q)]   and   E_{q∼D}[miss_f(q)].

To do so, we assume that we are given a sample query set Q = {q_1, ..., q_m} drawn i.i.d. from D. We attempt to build f minimizing the empirical size and miss rates, then resort to generalization bounds to relate these rates to the true ones.
4
Learning algorithms
We propose two learning algorithms in this section. The first is based on a splitting rule for KD-trees
designed to minimize a greedy surrogate for the empirical sizef function. The second is a algorithm
that determines the boundary locations of the cell structure used in LSH that minimize a tradeoff of
the empirical sizef and missf functions.
4.1
KD-trees
KD-trees are a popular cell partitioning scheme for R^D based on the binary search paradigm. The
data structure is built by picking a dimension, splitting the database along the median value in that
dimension, and then recursing on both halves.
procedure BuildTree(S)
    if |S| < MinSize, return leaf.
    else:
        Pick an axis i.
        Let median = median(s_i : s ∈ S).
        LeftTree  = BuildTree({s ∈ S : s_i ≤ median}).
        RightTree = BuildTree({s ∈ S : s_i > median}).
        return [LeftTree, RightTree, median, i].
To find a NN for a query q, one first computes distances to all points in the same cell, then traverses back up the tree. At each parent node, the minimum distance between q and the points explored so far is compared to the distance from q to the split; if the latter is smaller, the other child must also be explored.

[Figure: two cases at a parent node. When the ball around q of radius equal to the current best distance crosses the split, the other subtree must be explored; when it does not reach the split, that subtree can be skipped.]

Typically the cells contain only a few points; a query is expensive when it lies close to many cell boundaries, so that much of the tree must be explored. A minimal sketch of this backtracking search follows.
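The following is our own illustrative implementation of that traversal, not code from the paper. It mirrors BuildTree's node layout [left, right, median, axis]; a leaf is assumed to be an array of database indices (the paper's pseudocode leaves the leaf representation unspecified).

```python
import numpy as np

def kd_query(node, q, X, best=(np.inf, -1)):
    """Backtracking NN search on a KD-tree built by BuildTree (sketch).

    Returns (distance, index) of the nearest database point found.
    """
    if isinstance(node, np.ndarray):                 # leaf: scan its points
        for i in node:
            d = np.linalg.norm(X[i] - q)
            if d < best[0]:
                best = (d, i)
        return best
    left, right, median, axis = node
    near, far = (left, right) if q[axis] <= median else (right, left)
    best = kd_query(near, q, X, best)                # descend toward q first
    if abs(q[axis] - median) < best[0]:              # query ball crosses the split:
        best = kd_query(far, q, X, best)             # the other child must be explored
    return best
```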
Learning method
Rather than picking the median split at each level, we use the training queries q_i to pick a split that greedily minimizes the expected cost. A split s divides the sample queries in the cell being split into three sets: Q_tc, those q that are 'too close' to s, i.e. nearer to s than d(q, X); Q_r, those to the right of s but not in Q_tc; and Q_l, those to the left of s but not in Q_tc. Queries in Q_tc will require exploring both sides of the split. The split also divides the database points in the cell into X_l and X_r. The cost of split s is then defined as

    cost(s) ≡ |Q_l| · |X_l| + |Q_r| · |X_r| + |Q_tc| · |X|.

cost(s) is a greedy surrogate for Σ_i size_f(q_i); evaluating the true average size would require a potentially costly recursion. In contrast, minimizing cost(s) is painless, since it takes on at most 2m + n possible values and each can be evaluated quickly. Using a sample set led us to a very simple, natural cost function whose splits can be picked in a principled manner, as in the sketch below.
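A direct implementation of this greedy split selection (our sketch; `nn_dist[j]` is assumed to hold the j-th sample query's distance to its nearest database point, precomputed once):

```python
import numpy as np

def best_split(X_cell, Q_cell, nn_dist, axis):
    """Choose the split s along `axis` minimizing
    cost(s) = |Q_l||X_l| + |Q_r||X_r| + |Q_tc||X|.
    """
    xs, qs = X_cell[:, axis], Q_cell[:, axis]
    n = len(xs)
    best_s, best_cost = None, np.inf
    for s in np.unique(np.concatenate([xs, qs])):    # at most 2m + n candidate values
        too_close = np.abs(qs - s) < nn_dist         # these queries search both sides
        left = (qs <= s) & ~too_close
        right = (qs > s) & ~too_close
        cost = (left.sum() * np.sum(xs <= s)
                + right.sum() * np.sum(xs > s)
                + too_close.sum() * n)
        if cost < best_cost:
            best_s, best_cost = s, cost
    return best_s, best_cost
```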
4.2
Locality sensitive hashing
LSH was a tremendous breakthrough in NN search, as it led to data structures with provably sublinear (in the database size) retrieval time for approximate NN searching. More impressive still, the retrieval bounds are independent of the dimensionality of the database. We focus on the LSH scheme for the ‖·‖_p norm with p ∈ (0, 2], which we refer to as LSH_p. It is built on an extremely simple space partitioning scheme which we refer to as a rectilinear cell structure (RCS).
procedure BuildRCS(X ⊂ R^D)
    Let R ∈ R^{O(log n) × D}, with R_{ij} i.i.d. draws from a p-stable distribution.¹
    Project the database down to O(log n) dimensions: x_i ↦ R x_i.
    Uniformly grid the projected space with B bins per direction.
See Figure 3, left panel, for an example. On a query q, one simply finds the cell to which q belongs and returns the nearest x in that cell.

In general, LSH_p requires many RCSs, used in parallel, to achieve a constant probability of success; in many situations one may suffice [13]. Note that LSH_p only works for distances at a single scale R: the specific guarantee is that LSH_p will return a point x ∈ X within distance (1 + ε)R of q as long as d(q, X) < R. To solve the standard approximate NN problem, one must build O(log(n/ε)) LSH_p structures. A toy construction is sketched below.
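A toy construction for p = 2 (our illustrative sketch: Gaussian entries are 2-stable, and evenly spaced bins over the data range stand in for the uniform grid):

```python
import numpy as np

def build_rcs(X, k, B, seed=0):
    """Build a rectilinear cell structure over X (n x D) with k projections
    and B bins per direction (sketch for p = 2)."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((k, X.shape[1]))     # Gaussian entries: 2-stable
    Y = X @ R.T                                  # projected database, (n, k)
    lo, hi = Y.min(axis=0), Y.max(axis=0)
    # B - 1 interior boundaries per direction (uniformly spaced here)
    edges = [np.linspace(lo[j], hi[j], B + 1)[1:-1] for j in range(k)]
    cells = {}
    for i, y in enumerate(Y):
        key = tuple(int(np.searchsorted(e, yj)) for e, yj in zip(edges, y))
        cells.setdefault(key, []).append(i)
    return R, edges, cells

def query_rcs(q, R, edges, cells):
    """Return the database indices sharing q's cell, i.e. f(q) in the text."""
    y = R @ q
    key = tuple(int(np.searchsorted(e, yj)) for e, yj in zip(edges, y))
    return cells.get(key, [])
```

The learned variant described next keeps exactly this structure but replaces the uniform `edges` with boundaries chosen by dynamic programming.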
Learning method
We apply our learning framework directly to the class of RCSs, since they are the core structural component of LSH_p. We consider a slightly wider class of RCSs in which the bin widths are allowed to vary. Doing so potentially allows a single RCS to work at multiple scales, if the bin positions are chosen appropriately. We give a simple procedure that selects the bin boundary locations.

We wish to select boundary locations minimizing the cost Σ_i miss_f(q_i) + λ size_f(q_i), where λ is a tradeoff parameter (alternatively, one could fix a reasonable miss rate, say 5%, and minimize the size). The optimization is performed along one dimension at a time. Fortunately, the optimal binning along a dimension can be found by dynamic programming. There are at most m + n possible boundary locations; order them from left to right. The cost of placing the boundaries at p_1, p_2, ..., p_{B+1} can be decomposed as c[p_1, p_2] + ... + c[p_B, p_{B+1}], where

    c[p_i, p_{i+1}]  =  Σ_{q ∈ [p_i, p_{i+1}]} miss_f(q)  +  λ Σ_{q ∈ [p_i, p_{i+1}]} |{x ∈ [p_i, p_{i+1}]}|.

Let D be our dynamic programming table, where D[p, i] is the cost of placing the i-th boundary at position p with the remaining B + 1 − i boundaries to its right. Consistent with this definition, the recurrence is D[p, i] = min_{p' ≥ p} { c[p, p'] + D[p', i + 1] }, filled in from i = B + 1 down to i = 1; a sketch follows.
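In code, the recurrence is only a few lines (our sketch; the bin-cost matrix `c` is assumed precomputed from the sample queries and database projections as defined above, and the first boundary is left free):

```python
import numpy as np

def optimal_boundaries(c, B):
    """Dynamic program over boundary placements (illustrative sketch).

    c : (P, P) array; c[p, p2] is the cost of a bin spanning candidate
        positions p <= p2 (entries below the diagonal are ignored).
    B : number of bins, hence B + 1 boundaries.
    Returns (minimal total cost, chosen boundary positions).
    """
    P = c.shape[0]
    D = np.zeros(P)              # D[p] = D[p, B+1]: last boundary at p, no bins left
    choices = []
    for _ in range(B):           # fill in D[., i] for i = B down to 1
        D_new = np.full(P, np.inf)
        arg = np.zeros(P, dtype=int)
        for p in range(P):
            vals = c[p, p:] + D[p:]
            j = int(np.argmin(vals))
            D_new[p], arg[p] = vals[j], p + j
        D = D_new
        choices.insert(0, arg)   # choices[i-1] maps boundary i to boundary i+1
    p = int(np.argmin(D))        # best location for the first boundary
    path = [p]
    for arg in choices:
        p = int(arg[p])
        path.append(p)
    return float(D[path[0]]), path
```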
5
Generalization theory²

In our framework, a nearest neighbor data structure is learned by specifically designing it to perform well on a set of sample queries. Under what conditions will this search structure have good performance on future queries?

Recall the setting: there is a database X = {x_1, ..., x_n}, sample queries Q = {q_1, ..., q_m} drawn i.i.d. from some distribution D on Q, and we wish to learn a data structure f : Q → 2^X drawn from a function class F. We are interested in the generalization of size_f(q) ≡ |f(q)|/n and miss_f(q) ≡ |Γ_k(q) \ f(q)|/k, both of which have range [0, 1] (miss_f^ε(q) can be substituted for miss_f(q) throughout this section).

¹ A distribution D_p is p-stable if for any v ∈ R^d and Z, X_1, ..., X_d drawn i.i.d. from D_p, ⟨v, X⟩ is distributed as ‖v‖_p Z. For example, N(0, 1) is 2-stable.
² See the full version of this paper for any missing proofs.
Suppose a data structure f is chosen from some class F so as to have low empirical cost

    (1/m) Σ_{i=1}^{m} size_f(q_i)   and   (1/m) Σ_{i=1}^{m} miss_f(q_i).

Can we then conclude that the data structure f will continue to perform well for subsequent queries drawn from the underlying distribution on Q? In other words, are the empirical estimates above necessarily close to the true expected values E_{q∼D} size_f(q) and E_{q∼D} miss_f(q)?

There is a wide range of uniform convergence results which relate the difference between empirical and true expectations to the number of samples seen (in our case, m) and some measure of the complexity of the two classes {size_f : f ∈ F} and {miss_f : f ∈ F}. The following is particularly convenient to use, and is well-known [14, theorem 3.2].

Theorem 1. Let G be a set of functions from a set Z to [0, 1]. Suppose a sample z_1, ..., z_m is drawn from some underlying distribution on Z. Let G_m denote the restriction of G to these samples, that is,

    G_m = {(g(z_1), g(z_2), ..., g(z_m)) : g ∈ G}.

Then for any δ > 0, the following holds with probability at least 1 − δ:

    sup_{g∈G} | E g − (1/m) Σ_{i=1}^{m} g(z_i) |  ≤  2 √( 2 log |G_m| / m ) + √( log(2/δ) / m ).
This can be applied immediately to the kind of data structure used by LSH.

Definition 2. A (u_1, ..., u_d, B)-rectilinear cell structure (RCS) in R^D is a partition of R^D into B^d cells given by

    x ↦ (h_1(x · u_1), ..., h_d(x · u_d)),

where each h_i : R → {1, ..., B} is a partition of the real line into B intervals.

Theorem 3. Fix any vectors u_1, ..., u_d ∈ R^D and, for some positive integer B, let the set of data structures F consist of all (u_1, ..., u_d, B)-rectilinear cell structures in R^D. Fix any database of n points X ⊂ R^D. Suppose there is an underlying distribution over queries in R^D, from which m sample queries q_1, ..., q_m are drawn. Then, with probability at least 1 − δ,

    sup_{f∈F} | E[miss_f] − (1/m) Σ_{i=1}^{m} miss_f(q_i) |  ≤  2 √( 2d(B−1) log(m+n) / m ) + √( log(2/δ) / m ),

and likewise for size_f.

Proof. Fix any X = {x_1, ..., x_n} and any q_1, ..., q_m. In how many ways can these points be assigned to cells by the class of all (u_1, ..., u_d, B)-rectilinear data structures? Along each axis u_i there are B − 1 boundaries to be chosen and only m + n distinct locations for each of these (as far as the partitioning of the x_i's and q_i's is concerned). Therefore there are at most (m+n)^{d(B−1)} ways to carve up the points. Thus the functions {miss_f : f ∈ F} (or likewise {size_f : f ∈ F}) collapse to a set of size just (m+n)^{d(B−1)} when restricted to m queries; the rest follows from Theorem 1. □
This is good generalization performance because it depends only on the projected dimension d, not the original dimension D. It holds when the projection directions u_1, ..., u_d are chosen randomly but, more remarkably, even if they are chosen based on X (for instance, by running PCA on X). If we learn the projections as well (instead of using random ones), the bound degrades substantially.

Theorem 4. Consider the same setting as Theorem 3, except that now F ranges over (u_1, ..., u_d, B)-rectilinear cell structures for all choices of u_1, ..., u_d ∈ R^D. Then with probability at least 1 − δ,

    sup_{f∈F} | E[miss_f] − (1/m) Σ_{i=1}^{m} miss_f(q_i) |  ≤  2 √( (2 + 2d(D + B − 2) log(m+n)) / m ) + √( log(2/δ) / m ),

and likewise for size_f.
[Figure 1: Left: outer ring is the database; inner cluster of points are the queries. Center: KD-tree with standard median splits. Right: KD-tree with learned splits.]
KD-trees are slightly different from RCSs: the directions u_i are simply the coordinate axes, and the number of partitions per direction varies (e.g. one direction may have 10 partitions, another only 1).
Theorem 5. Let F be the set of all depth-α KD-trees in R^D and let X ⊂ R^D be a database of points. Suppose there is an underlying distribution over queries in R^D from which q_1, ..., q_m are drawn. Then with probability at least 1 − δ,

    sup_{f∈F} | E[miss_f] − (1/m) Σ_{i=1}^{m} miss_f(q_i) |  ≤  2 √( (2^{α+1} − 2) log(D(3m+n)) / m ) + √( log(2/δ) / m ).

A KD-tree using median splits has depth α ≈ log n. The depth of a KD-tree with learned splits can be higher, though we found empirically that the depth was always much less than 2 log n (and it can of course be restricted manually). KD-trees require significantly more samples than RCSs to generalize; the class of KD-trees is much more complex than that of RCSs.
6
Experiments³

6.1
KD-trees
First let us look at a simple example comparing the learned splits to median splits. Figure 1 shows
a 2-dimensional dataset and the cell partitions produced by the learned splits and the median splits.
The KD-tree constructed with the median splitting rule places nearly all of the boundaries running
right through the queries. As a result, nearly the entire database will have to be searched for queries
drawn from the center cluster distribution. The KD-tree with the learned splits places most of the
boundaries right around the actual database points, ensuring that fewer leaves will need to be examined for each query.
We now show results on several datasets from the UCI repository and 2004 KDD cup competition.
We restrict attention to relatively low-dimensional datasets (D < 100) since that is the domain
in which KD-trees are typically applied. These experiments were all conducted using a modified
version of Mount and Arya's excellent KD-tree software [15]. For this set of experiments, we used a randomly selected subset of the dataset as the database and a separate small subset as the test queries. For the sample queries, we used the database itself; i.e., no additional data was used to build the learned KD-tree.
The following table shows the results. We compare performance in terms of the average number of
database points we have to compute distances to on a test set.
    data set           DB size   test pts   dim   # distance calculations         % improvement
                                                  median split   learned split
    Corel (UCI)        32k       5k         32    1035.7         403.7            61.0
    Covertype (UCI)    100k      10k        55    20.8           18.4             11.4
    Letter (UCI)       18k       2k         16    470.1          353.8            27.4
    Pen digits (UCI)   9k        1k         16    168.9          114.9            31.9
    Bio (KDD)          100k      10k        74    1409.8         1310.8           7.0
    Physics (KDD)      100k      10k        78    1676.6         404.0            75.9
The learned method outperforms the standard method on all of the datasets, showing a very large improvement on several of them. Note also that even the standard method exhibits good performance, often requiring distance calculations to less than one percent of the database. We are thus showing strong improvements on what are already quite good results.

³ Additional experiments appear in the full version of this paper.
[Figure 2: Percentage of DB examined as a function of ε (the approximation factor) for various query distributions, comparing the standard KD-tree and the learned KD-tree. Panels: Bears; N. American animals; All animals; Everything.]

[Figure 3: Example RCSs. Left: standard RCS (random boundaries). Right: learned RCS (tuned boundaries).]
We additionally experimented with the 'Corel50' image dataset. It is divided into 50 classes (e.g. air shows, bears, tigers, Fiji) containing 100 images each. We used the 371-dimensional 'semantic space' representation of the images recently developed in a series of image retrieval papers (see e.g. [16]). This dataset allows us to explore the effect of differing query and database distributions in a natural setting. It also demonstrates that KD-trees with learned parameters can perform well on high-dimensional data.

Figure 2 shows the results of running KD-trees using median and learned splits. In each case, 4000 images were chosen for the database (from across all the classes) and images from select classes were chosen for the queries. The 'All' queries were drawn from all classes; the 'Animals' from the 11 animal classes; the 'N. American animals' from 5 of the animal classes; and the 'Bears' from the two bear classes. Standard KD-trees perform somewhat better than brute force in these experiments; the learned KD-trees yield much faster retrieval times across a range of approximation errors. Note also that the performance of the learned KD-tree seems to improve as the query distribution becomes simpler, whereas the performance of the standard KD-tree actually degrades.
6.2
RCS/LSH
Figure 3 shows a sample run of the learning algorithm. The queries and DB are drawn from the
same distribution. The learning algorithm adjusts the bin boundaries to the regions of density.
Experimenting with RCS structures is somewhat challenging, since there are two parameters to set (the number of projections and of boundaries), an approximation factor ε, and two quantities to compare (size and miss). We swept over the two parameters to get results for the standard RCSs. Results for learned RCSs were obtained using only a single (essentially unoptimized) parameter setting. Rather than minimizing a tradeoff between size_f and miss_f, we constrained the miss rate and optimized size_f. The constraint was varied between runs (2%, 4%, etc.) to get comparable results.
Figure 4 shows the comparison on databases of 10k points drawn from the MNIST and Physics
datasets (2.5k points were used as sample queries). We see a marked improvement for the Physics
dataset and a small improvement for the MNIST dataset. We suspect that the learning algorithm
helps substantially for the physics data because the one-dimensional projections are highly nonuniform whereas the MNIST one-dimensional projections are much more uniform.
[Figure 4: size rate (fraction of DB) versus miss rate for standard and tuned RCSs. Left: Physics dataset. Right: MNIST dataset.]
7
Conclusion
The primary contribution of this paper is demonstrating that building a NN search structure can be
fruitfully viewed as a learning problem. We used this framework to develop algorithms that learn
RCSs and KD-trees optimized for a query distribution. Possible future work includes applying the
learning framework to other data structures, though we expect that even stronger results may be
obtained by using this framework to develop a novel data structure from the ground up. On the
theoretical side, margin-based generalization bounds may allow the use of richer classes of data
structures.
Acknowledgments
We are grateful to the NSF for support under grants IIS-0347646 and IIS-0713540. Thanks to Nikhil
Rasiwasia, Sunhyoung Han, and Nuno Vasconcelos for providing the Corel50 data.
References
[1] J. H. Friedman, J. L. Bentley, and R. A. Finkel. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, 3(3):209-226, 1977.
[2] S. Arya, D. M. Mount, N. S. Netanyahu, R. Silverman, and A. Wu. An optimal algorithm for approximate nearest neighbor searching. Journal of the ACM, 45(6):891-923, 1998.
[3] T. Liu, A. W. Moore, A. Gray, and K. Yang. An investigation of practical approximate nearest neighbor algorithms. In Neural Information Processing Systems (NIPS), 2004.
[4] S. Dasgupta and Y. Freund. Random projection trees and low dimensional manifolds. Technical report, UCSD, 2007.
[5] P. Indyk. Nearest neighbors in high dimensional spaces. In J. E. Goodman and J. O'Rourke, editors, Handbook of Discrete and Computational Geometry. CRC Press, 2006.
[6] M. T. Orchard. A fast nearest-neighbor search algorithm. In ICASSP, pages 2297-2300, 1991.
[7] E. Vidal. An algorithm for finding nearest neighbours in (approximately) constant average time. Pattern Recognition Letters, 4:145-157, 1986.
[8] S. Omohundro. Five balltree construction algorithms. Technical report, ICSI, 1989.
[9] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbor. In ICML, 2006.
[10] K. L. Clarkson. Nearest-neighbor searching and metric space dimensions. In Nearest-Neighbor Methods for Learning and Vision: Theory and Practice, pages 15-59. MIT Press, 2006.
[11] S. Maneewongvatana and D. Mount. The analysis of a probabilistic approach to nearest neighbor searching. In Workshop on Algorithms and Data Structures, 2001.
[12] S. Maneewongvatana and D. Mount. Analysis of approximate nearest neighbor searching with clustered point sets. In Workshop on Algorithm Engineering and Experimentation (ALENEX), 1999.
[13] M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In SCG 2004, pages 253-262, New York, NY, USA, 2004. ACM Press.
[14] O. Bousquet, S. Boucheron, and G. Lugosi. Theory of classification: a survey of recent advances. ESAIM: Probability and Statistics, 9:323-375, 2004.
[15] D. Mount and S. Arya. ANN library. http://www.cs.umd.edu/~mount/ANN/.
[16] N. Rasiwasia, P. Moreno, and N. Vasconcelos. Bridging the gap: query by semantic example. IEEE Transactions on Multimedia, 2007.